Archive for the 'microfoundations' Category

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), focused on technical identification issues and on the dependence of the estimated regression coefficients of econometric models on agents' expectations, which are conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for the Keynesian abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that "classical" (by which they obviously mean "neoclassical") economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and still has not, described in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not-too-realistic stories about how markets might clear. The two best-known stories are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a "long-run" equilibrium price for each good, correctly anticipated by market participants and corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so the Marshallian story is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his/her/their plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent try to normalize their insistence that equilibrium is an obligatory postulate that must be accepted by economists by calling it "routine to describe an economy following a multivariate stochastic process as being 'in equilibrium,' by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied," as if the routine adoption of any theoretical or methodological assumption becomes ipso facto justified once adopted routinely. That justification was unacceptable to Lucas and Sargent when made on behalf of "sticky wages" or Keynesian "rules of thumb," but somehow became compelling when invoked on behalf of perpetual "equilibrium" and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is "meaningless" to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that the hypothesis that any collection of economic time series is either in equilibrium or out of equilibrium is empirically empty. This property has subsequently been described as an "Anything Goes Theorem" (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium is an argument supporting the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras's Law is satisfied, aggregating the excess demands of all agents does not ensure that the aggregate excess-demand functions behave in such a way that a unique or at least a stable equilibrium exists. But if we have no good argument to explain why a unique, or at least a stable, neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?
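To state the Sonnenschein-Mantel-Debreu result a bit more formally (my compressed paraphrase, not the authors' own notation): let \(z(p)\) be a candidate aggregate excess-demand function defined on price vectors bounded away from zero. If

\[
z \text{ is continuous,} \qquad z(\lambda p) = z(p)\ \text{for all } \lambda > 0, \qquad p \cdot z(p) = 0\ \text{for all } p,
\]

then there exists an exchange economy of finitely many utility-maximizing consumers whose aggregate excess demand coincides with \(z(p)\) on that price domain. Continuity, homogeneity of degree zero, and Walras's Law are thus essentially the only restrictions that individual rationality imposes on aggregate excess demand, which is why nothing guarantees that the equilibrium defined by \(z(p) = 0\) is unique or stable.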

As Franklin Fisher (1989) showed, the inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because comparative-statics exercises, the bread and butter of neoclassical price theory (microeconomics), are conditional on the assumption that there is at least one stable general-equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Kepler and later Galileo conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system and proposing that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric model predicted the observed time series of planetary positions better than the new heliocentric model did. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative lacking scientific or philosophical justification, of formal intertemporal optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from the more relevant inquiry, initiated by Keynes along with some of his predecessors and contemporaries, into the nature and causes of intertemporal coordination failures.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time-series variables, current macroeconomic disputes are real disputes about what the world is like: about whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or of coordination failures that cause economic agents to be surprised and disappointed and rendered unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It's long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not "found" macroeconomics, though he certainly influenced its development decisively. Keynes never used the term "macroeconomics," and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, having explicitly denied, and argued at length, that his results depended on the assumption of sticky wages, Keynes certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence of micro (utility and cost theory, econ 101a), (general equilibrium theory, 101b), and (macro theory, 102) as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he'd been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students' questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement and highlighting its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory,” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment imagining that the distance travelled by a projectile launched from a fixed position being progressively lengthened until the projectile’s trajectory transitioned into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful. The macroeconomic debate was thus often framed as a dispute about the interest elasticity of the demand for money. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
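For readers who never sat through such a course, here is a minimal algebraic sketch of the IS-LM apparatus (a generic textbook rendering, not Axel's own notation):

\[
\text{IS:} \quad Y = C(Y) + I(r) + G, \qquad 0 < C'(Y) < 1, \quad I'(r) < 0
\]
\[
\text{LM:} \quad \frac{M}{P} = L(Y, r), \qquad L_Y > 0, \quad L_r < 0
\]

The two equations jointly determine income \(Y\) and the interest rate \(r\) for a given money stock \(M\) and price level \(P\). A nearly horizontal LM curve (money demand highly interest-elastic) yields the extreme Keynesian-cross case in which only fiscal policy matters; a nearly vertical LM curve (money demand interest-inelastic) yields the Monetarist case in which only monetary policy does.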

In his intermediate macro course, Axel presented the standard macro model, and I don't remember his weighing in that much with his own criticism; he didn't teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics like Hicks's 1937 exposition of the IS-LM model and Alvin Hansen's A Guide to Keynes (1953), with Friedman's 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin's 1948 paper on price flexibility and full employment, A. W. Phillips's original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own "Keynes and the Keynesians: A Suggested Interpretation," a preview of his forthcoming book, and probably others that I'm not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in the fall 1968 quarter, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener's book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, stable and unstable dynamic systems and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and to business cycles. Here's how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener's book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights into informational scarcity on macroeconomic analysis were never fulfilled. A glance at the index to Axel’s excellent collection of essays written from the late 1960s and the late 1970s Information and Coordination reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different path following the path blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing wage, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing market wage by raising the price level to reduce the real wage down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment could occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn't arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say's Principle, which years later led me to write my paper "Say's Law and the Classical Theory of Depressions," included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

In Axel's view, Keynes's great contribution was to draw attention to the implications of the absence of an actual coordinating mechanism, a mechanism that neoclassical economic theory simply assumed, whether in the form of Walrasian tâtonnement or of the implicit Marshallian ceteris paribus assumption. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes's contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement while achieving the same result by way of a methodological stratagem in three parts: (a) insisting that all agents be treated as equilibrium optimizers; (b) assuming that those agents therefore form identical rational expectations of all future prices using the same common knowledge; so that (c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers, in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real-productivity shocks that dispense with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis but with little claim to any actual analytical insight.

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. Earl's explanation of the temporary-equilibrium method was based on the assumption that markets for current delivery clear, but at market-clearing prices that differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek's work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

Hayek and the Lucas Critique

In March I wrote a blog post, "Robert Lucas and the Pretense of Science," which was a draft proposal for a paper for a conference on Coordination Issues in Historical Perspectives to be held in September. My proposal having been accepted, I'm going to post sections of the paper on the blog in hopes of getting some feedback as I write the paper. What follows is the first of several anticipated draft sections.

Just 31 years old, F. A. Hayek rose rapidly to stardom after giving four lectures at the London School of Economics at the invitation of his almost exact contemporary, and soon to be best friend, Lionel Robbins. Hayek had already published several important works, of which the most important was Hayek ([1928] 1984), which laid out his basic conceptualization of an intertemporal equilibrium almost simultaneously with the similar conceptualizations of two young Swedish economists, Gunnar Myrdal (1927) and Erik Lindahl ([1929] 1939).

Hayek’s (1931a) LSE lectures aimed to provide a policy-relevant version of a specific theoretical model of the business cycle that drew upon but was a just a particular instantiation of the general conceptualization developed in his 1928 contribution. Delivered less than two years after the start of the Great Depression, Hayek’s lectures gave a historical overview of the monetary theory of business-cycles, an account of how monetary disturbances cause real effects, and a skeptical discussion of how monetary policy might, or more likely might not, counteract or mitigate the downturn then underway. It was Hayek’s skepticism about countercyclical policy that helped make those lectures so compelling but also elicited such a hostile reaction during the unfolding crisis.

The extraordinary success of his lectures established Hayek's reputation as a preeminent monetary theorist alongside established figures like Irving Fisher, A. C. Pigou, D. H. Robertson, R. G. Hawtrey, and of course J. M. Keynes. Hayek's (1931b) critical review of Keynes's just-published Treatise on Money (1930), appearing soon after his LSE lectures and provoking a heated exchange with Keynes himself, showed him to be a skilled debater and a powerful polemicist.

Hayek’s meteoric rise was, however, followed by a rapid fall from the briefly held pinnacle of his early career. Aside from the imperfections and weaknesses of his own theoretical framework (Glasner and Zimmerman 2021), his diagnosis of the causes of the Great Depression (Glasner and Batchelder [1994] 2021a, 2021b) and his policy advice (Glasner 2021) were theoretically misguided and inappropriate to the deflationary conditions underlying the Great Depression).

Nevertheless, Hayek’s conceptualization of intertemporal equilibrium provided insight into the role not only of prices, but also of price expectations, in accounting for cyclical fluctuations. In Hayek’s 1931 version of his cycle theory, the upturn results from bank-financed investment spending enabled by monetary expansion that fuels an economic boom characterized by increased total spending, output and employment. However, owing to resource constraints, misalignments between demand and supply, and drains of bank reserves, the optimistic expectations engendered by the boom are doomed to eventual disappointment, whereupon a downturn begins.

I need not engage here with the substance of Hayek's cycle theory, which I have criticized elsewhere (see references above). But I would like to consider his 1934 explanation, responding to Hansen and Tout (1933), of why a permanent monetary expansion would be impossible. Hansen and Tout disputed Hayek's contention that monetary expansion must inevitably lead to a recession, arguing that an unconstrained monetary authority, not being forced by a reserve drain to halt a monetary expansion, could allow a boom to continue indefinitely, permanently maintaining an excess of investment over saving.

Hayek (1934) responded as follows:

[A] constant rate of forced saving (i.e., investment in excess of voluntary saving) [requires] a rate of credit expansion which will enable the producers of intermediate products, during each successive unit of time, to compete successfully with the producers of consumers’ goods for constant additional quantities of the original factors of production. But as the competing demand from the producers of consumers’ goods rises (in terms of money) in consequence of, and in proportion to, the preceding increase of expenditure on the factors of production (income), an increase of credit which is to enable the producers of intermediate products to attract additional original factors, will have to be, not only absolutely but even relatively, greater than the last increase which is now reflected in the increased demand for consumers’ goods. Even in order to attract only as great a proportion of the original factors, i.e., in order merely to maintain the already existing capital, every new increase would have to be proportional to the last increase, i.e., credit would have to expand progressively at a constant rate. But in order to bring about constant additions to capital, it would have to do more: it would have to increase at a constantly increasing rate. The rate at which this rate of increase must increase would be dependent upon the time lag between the first expenditure of the additional money on the factors of production and the re-expenditure of the income so created on consumers’ goods. . . .

But I think it can be shown . . . that . . . such a policy would . . . inevitably lead to a rapid and progressive rise in prices which, in addition to its other undesirable effects, would set up movements which would soon counteract, and finally more than offset, the “forced saving.” That it is impossible, either for a simple progressive increase of credit which only helps to maintain, and does not add to, the already existing “forced saving,” or for an increase in credit at an increasing rate, to continue for a considerable time without causing a rise in prices, results from the fact that in neither case have we reason to assume that the increase in the supply of consumers’ goods will keep pace with the increase in the flow of money coming on to the market for consumers’ goods. Insofar as, in the second case, the credit expansion leads to an ultimate increase in the output of consumers’ goods, this increase will lag considerably and increasingly (as the period of production increases) behind the increase in the demand for them. But whether the prices of consumers’ goods will rise faster or slower, all other prices, and particularly the prices of the original factors of production, will rise even faster. It is only a question of time when this general and progressive rise of prices becomes very rapid. My argument is not that such a development is inevitable once a policy of credit expansion is embarked upon, but that it has to be carried to that point if a certain result—a constant rate of forced saving, or maintenance without the help of voluntary saving of capital accumulated by forced saving—is to be achieved.
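The arithmetic behind the first part of Hayek's claim can be illustrated with a deliberately stripped-down sketch of my own (not Hayek's model, and abstracting entirely from his capital-theoretic detail). Suppose factor incomes earned in one period are spent on consumers' goods in the next, and investment ("forced saving") is financed entirely by newly created credit; then merely holding the forced-saving share of factor spending constant requires credit injections that grow geometrically:

# A stripped-down illustration (my construction, not Hayek's own model) of why a
# constant share of "forced saving" requires credit injections that grow at a
# constant proportional rate, period after period.

s = 0.10          # share of total factor spending financed by new credit (forced saving)
F = 100.0         # total factor spending in the initial period
periods = 6

print(f"{'period':>6} {'consumer demand':>16} {'credit injection':>17} {'total factor spending':>22}")
for t in range(1, periods + 1):
    C = F                     # last period's factor incomes return as consumer spending
    F = C / (1 - s)           # total factor spending: F = C + I with I = s * F
    I = s * F                 # credit-financed investment needed to keep the share at s
    print(f"{t:>6} {C:>16.2f} {I:>17.2f} {F:>22.2f}")

# Each period's required injection is 1/(1-s) times the previous one, i.e., credit must
# "expand progressively at a constant rate"; since consumer-goods output cannot grow in
# step with the money flowing onto the consumers'-goods market, prices must rise.

Hayek's further claim is that this geometrically growing money stream, chasing consumers' goods whose output cannot keep pace, must sooner or later show up as a rapid and accelerating rise in prices.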

Friedman’s (1968) argument why monetary expansion could not permanently reduce unemployment below its “natural rate” closely mirrors (though he almost certainly never read) Hayek’s argument that monetary expansion could not permanently maintain a rate of investment spending above the rate of voluntary saving. Generalizing Friedman’s logic, Lucas (1976) transformed it into a critique of using econometric estimates of relationships like the Phillips Curve, the specific target of Friedman’s argument, as a basis for predicting the effects of policy changes, such estimates being conditional on implicit expectational assumptions which aren’t invariant to the policy changes derived from those estimates.

Stated differently, such econometric estimates are reduced forms that, without identifying restrictions, do not allow the estimated regression coefficients to be used to predict the effects of a policy change.

Only by specifying, and estimating, the deep structural relationships governing the response to a policy change could the effect of a potential policy change be predicted with some confidence that the prediction would not prove erroneous because of changes in the econometrically estimated relationships once agents altered their behavior in response to the policy change.
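A textbook way of seeing the point (a standard illustration, not Lucas's own example) is an expectations-augmented Phillips curve in which expected inflation enters the structural relation:

\[
\pi_t = \mathbb{E}_{t-1}\pi_t + \gamma\,(u^{*} - u_t) + \varepsilon_t .
\]

If the monetary authority follows a rule that generates average inflation \(\bar{\pi}\), and agents come to expect it, so that \(\mathbb{E}_{t-1}\pi_t = \bar{\pi}\), the data generated under that rule trace out the reduced form

\[
\pi_t = (\bar{\pi} + \gamma u^{*}) - \gamma\, u_t + \varepsilon_t ,
\]

whose intercept depends on the policy rule itself. Estimating that reduced form and using it to predict the effects of a more inflationary rule fails, because the intercept shifts as soon as expectations adjust to the new rule; only the structural parameters \(\gamma\) and \(u^{*}\) are invariant to the policy change.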

In his 1974 Nobel Lecture, Hayek offered a similar explanation of why an observed correlation between aggregate demand and employment provides no basis for predicting the effect of policies aimed at increasing aggregate demand and reducing unemployment if the likely changes in structural relationships caused by those policies are not taken into account.

[T]he very measures which the dominant “macro-economic” theory has recommended as a remedy for unemployment, namely the increase of aggregate demand, have become a cause of a very extensive misallocation of resources which is likely to make later large-scale unemployment inevitable. The continuous injection . . . money at points of the economic system where it creates a temporary demand which must cease when the increase of the quantity of money stops or slows down, together with the expectation of a continuing rise of prices, draws labour . . . into employments which can last only so long as the increase of the quantity of money continues at the same rate – or perhaps even only so long as it continues to accelerate at a given rate. What this policy has produced is not so much a level of employment that could not have been brought about in other ways, as a distribution of employment which cannot be indefinitely maintained . . . The fact is that by a mistaken theoretical view we have been led into a precarious position in which we cannot prevent substantial unemployment from re-appearing; not because . . . this unemployment is deliberately brought about as a means to combat inflation, but because it is now bound to occur as a deeply regrettable but inescapable consequence of the mistaken policies of the past as soon as inflation ceases to accelerate.

Hayek’s point that an observed correlation between the rate of inflation (a proxy for aggregate demand) and unemployment cannot be relied on in making economic policy was articulated succinctly and abstractly by Lucas as follows:

In short, one can imagine situations in which empirical Phillips curves exhibit long lags and situations in which there are no lagged effects. In either case, the “long-run” output inflation relationship as calculated or simulated in the conventional way has no bearing on the actual consequences of pursuing a policy of inflation.

[T]he ability . . . to forecast consequences of a change in policy rests crucially on the assumption that the parameters describing the new policy . . . are known by agents. Over periods for which this assumption is not approximately valid . . . empirical Phillips curves will appear subject to “parameter drift,” describable over the sample period, but unpredictable for all but the very near future.

The lesson inferred by both Hayek and Lucas was that Keynesian macroeconomic models of aggregate demand, inflation and employment can't reliably guide economic policy and should be discarded in favor of models more securely grounded in the microeconomic theories of supply and demand that emerged from the Marginal Revolution of the 1870s and eventually became the neoclassical economic theory that describes the characteristics of an efficient, decentralized and self-regulating economic system. This was the microeconomic foundation on which Hayek and Lucas believed macroeconomic theory ought to be built, instead of the Keynesian system that they were criticizing. But that superficial similarity obscures the profound methodological and substantive differences between them.

Those differences will be considered in future posts.

References

Friedman, M. 1968. “The Role of Monetary Policy.” American Economic Review 58(1):1-17.

Glasner, D. 2021. “Hayek, Deflation, Gold and Nihilism.” Ch. 16 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. [1994] 2021a. “Debt, Deflation, the Gold Standard and the Great Depression.” Ch. 13 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. 2021b. “Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel?” Ch. 14 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Zimmerman, P. 2021.  “The Sraffa-Hayek Debate on the Natural Rate of Interest.” Ch. 15 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Hansen, A. and Tout, H. 1933. “Annual Survey of Business Cycle Theory: Investment and Saving in Business Cycle Theory,” Econometrica 1(2): 119-47.

Hayek, F. A. [1928] 1984. “Intertemporal Price Equilibrium and Movements in the Value of Money.” In R. McCloughry (Ed.), Money, Capital and Fluctuations: Early Essays (pp. 171–215). Routledge.

Hayek, F. A. 1931a. Prices and Production. London: Macmillan.

Hayek, F. A. 1931b. “Reflections on the Pure Theory of Money of Mr. Keynes.” Economica 33:270-95.

Hayek, F. A. 1934. “Capital and Industrial Fluctuations.” Econometrica 2(2): 152-67.

Keynes, J. M. 1930. A Treatise on Money. 2 vols. London: Macmillan.

Lindahl, E. [1929] 1939. “The Place of Capital in the Theory of Price.” In E. Lindahl, Studies in the Theory of Money and Capital. George Allen & Unwin.

Lucas, R. E. [1976] 1985. “Econometric Policy Evaluation: A Critique.” In R. E. Lucas, Studies in Business-Cycle Theory. Cambridge: MIT Press.

Myrdal, G. 1927. Prisbildningsproblemet och Foranderligheten (Price Formation and the Change Factor). Almqvist & Wiksell.

On the Labor Supply Function

The bread and butter of economics is demand and supply. The basic idea of a demand function (or a demand curve) is to describe a relationship between the price at which a given product, commodity or service can be bought and the quantity that will be bought by some individual. The standard assumption is that the quantity demanded increases as the price falls, so that the demand curve is downward-sloping, but not much more can be said about the shape of a demand curve unless special assumptions are made about the individual's preferences.

Demand curves aren’t natural phenomena with concrete existence; they are hypothetical or notional constructs pertaining to individual preferences. To pass from individual demands to a market demand for a product, commodity or service requires another conceptual process summing the quantities demanded by each individual at any given price. The conceptual process is never actually performed, so the downward-sloping market demand curve is just presumed, not observed as a fact of nature.

The summation process required to pass from individual demands to a market demand implies that the quantity demanded at any price is the quantity demanded when each individual pays exactly the same price that every other demander pays. At a price of $10/widget, the widget demand curve tells us how many widgets would be purchased if every purchaser in the market can buy as much as desired at $10/widget. If some customers can buy at $10/widget while others have to pay $20/widget or some can’t buy any widgets at any price, then the quantity of widgets actually bought will not equal the quantity on the hypothetical widget demand curve corresponding to $10/widget.

Similar reasoning underlies the supply function or supply curve for any product, commodity or service. The market supply curve is built up from the preferences and costs of individuals and firms and represents the amount of a product, commodity or service that suppliers would be willing to offer for sale at different prices. The market supply curve is the result of a conceptual summation process that adds up the amounts that would hypothetically be offered for sale by every agent at each price.
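A minimal sketch of that summation process, with made-up individual schedules (the numbers are purely illustrative), looks like this:

# Purely illustrative: horizontal summation of individual demand and supply schedules
# into market schedules, evaluated at a single uniform price that everyone faces.

def market_quantity(schedules, price):
    """Sum the quantities that each individual would trade at the given uniform price."""
    return sum(max(q(price), 0.0) for q in schedules)

# Hypothetical individual demand schedules: quantity demanded as a function of price.
individual_demands = [
    lambda p: 10 - 2 * p,   # buyer 1
    lambda p: 6 - p,        # buyer 2
    lambda p: 4 - 0.5 * p,  # buyer 3
]

# Hypothetical individual supply schedules: quantity offered as a function of price.
individual_supplies = [
    lambda p: 3 * p - 2,    # seller 1
    lambda p: 2 * p,        # seller 2
]

for p in [1.0, 2.0, 3.0, 4.0]:
    qd = market_quantity(individual_demands, p)
    qs = market_quantity(individual_supplies, p)
    print(f"price {p:4.1f}: market demand {qd:6.2f}, market supply {qs:6.2f}")

The point of the exercise is only that both market curves are defined by evaluating every individual schedule at the same price; nothing in the construction says how such a uniform price would actually come to prevail.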

The point of this pedantry is to emphasize that the demand and supply curves we use are drawn on the assumption that a single uniform market price prevails in every market and that all demanders and suppliers can trade without limit at those prices, so that their trading plans are fully executed. This is the equilibrium paradigm underlying the supply-demand analysis of econ 101.

Economists quite unself-consciously deploy supply-demand concepts to analyze labor markets in a variety of settings. Sometimes, if the labor market under analysis is limited to a particular trade or a particular skill or a particular geographic area, the supply-demand framework is reasonable and appropriate. But when applied to the aggregate labor market of the whole economy, the supply-demand framework is inappropriate, because the ceteris-paribus proviso (all prices other than the price of the product, commodity or service in question are held constant) attached to every supply-demand model is obviously violated.

Thoughtlessly applying a simple supply-demand model to the labor market of an entire economy leads to the conclusion that widespread unemployment, a situation in which some workers are unemployed but would have accepted employment at wages that comparably skilled workers are actually receiving, implies that wages are above the market-clearing level consistent with full employment.

The attached diagram shows the simplest version of this analysis. The market wage (W1) is higher than the equilibrium wage (We) at which all workers willing to accept that wage could be employed. The difference between the number of workers seeking employment at the market wage (LS) and the number of workers that employers seek to hire (LD) measures the amount of unemployment. According to this analysis, unemployment would be eliminated if the market wage fell from W1 to We.
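For readers who can't see the diagram, a numerical stand-in (my own made-up schedules, not taken from the original figure) conveys the same logic:

# A made-up numerical stand-in for the textbook labor-market diagram:
# linear labor supply and labor demand, a market wage stuck above the
# market-clearing wage, and the resulting measured unemployment.

def labor_supply(w):
    return 40 + 10 * w      # workers seeking employment at wage w

def labor_demand(w):
    return 100 - 10 * w     # workers employers seek to hire at wage w

w_eq = 3.0                  # here labor_supply(3) == labor_demand(3) == 70
w_market = 4.0              # a market wage above the market-clearing level

unemployment = labor_supply(w_market) - labor_demand(w_market)
print(f"At W1 = {w_market}: LS = {labor_supply(w_market):.0f}, "
      f"LD = {labor_demand(w_market):.0f}, unemployment = {unemployment:.0f}")
print(f"At We = {w_eq}: LS = LD = {labor_supply(w_eq):.0f}, unemployment = 0")

On this logic, a fall in the wage from W1 to We would eliminate the measured unemployment.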

Applying supply-demand analysis to aggregate unemployment fails on two levels. First, workers clearly are unable to execute their plans to offer their labor services at the wage at which other workers are employed, so individual workers are off their supply curves. Second, it is impossible to assume, as supply-demand analysis requires, that all other prices and incomes remain constant, so that the demand and supply curves do not move as wages and employment change. When multiple variables are mutually interdependent and simultaneously determined, the analysis of just two variables (wages and employment) cannot be isolated from the rest of the system. Focusing on the wage as the variable that needs to change to restore full employment is an example of tunnel vision.

Keynes rejected the idea that economy-wide unemployment could be eliminated by cutting wages. Although Keynes’s argument against wage cuts as a cure for unemployment was flawed, he did have at least an intuitive grasp of the basic weakness in the argument for wage cuts: that high aggregate unemployment is not usefully analyzed as a symptom of excessive wages. To explain why wage cuts aren’t the cure for high unemployment, Keynes introduced a distinction between voluntary and involuntary unemployment.

Forty years later, Robert Lucas began his effort — not the first such effort, but by far the most successful — to discredit the concept of involuntary unemployment. Here’s an early example:

Keynes [hypothesized] that measured unemployment can be decomposed into two distinct components: ‘voluntary’ (or frictional) and ‘involuntary’, with full employment then identified as the level prevailing when involuntary unemployment equals zero. It seems appropriate, then, to begin by reviewing Keynes’ reasons for introducing this distinction in the first place. . . .

Accepting the necessity of a distinction between explanations for normal and cyclical unemployment does not, however, compel one to identify the first as voluntary and the second as involuntary, as Keynes goes on to do. This terminology suggests that the key to the distinction lies in some difference in the way two different types of unemployment are perceived by workers. Now in the first place, the distinction we are after concerns sources of unemployment, not differentiated types. . . .[O]ne may classify motives for holding money without imagining that anyone can subdivide his own cash holdings into “transactions balances,” “precautionary balances”, and so forth. The recognition that one needs to distinguish among sources of unemployment does not in any way imply that one needs to distinguish among types.

Nor is there any evident reason why one would want to draw this distinction. Certainly the more one thinks about the decision problem facing individual workers and firms the less sense this distinction makes. The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesirable capital loss. Nevertheless, the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

Lucas, Studies in Business Cycle Theory, pp. 241-43

Consider this revision of Lucas’s argument:

The expressway driver who is slowed down in a traffic jam does not volunteer to be in this situation; he has suffered a waste of his time. Nevertheless, the driver can get off the expressway at the next exit to find an alternate route. Thus, there is an involuntary element in every traffic jam, in the sense that no one chooses to waste time; there is also a voluntary element in all traffic jams, in the sense that however stuck one is in traffic, one can always take the next exit on the expressway.

What is lost on Lucas is that, for an individual worker, taking a wage cut to avoid being laid off accomplishes nothing, because the willingness of a single worker to accept a wage cut would not induce the employer to increase output and employment. Unless all workers agreed to take wage cuts, a wage cut accepted by one employee would not cause the employer to reconsider its plan to reduce output in the face of declining demand for its product. Only the collective offer of all workers to accept a wage cut would induce an output response by the employer and a decision not to lay off part of its work force.

But even a collective offer by all workers to accept a wage cut would be unlikely to avoid an output reduction and layoffs. Consider a simple case in which the demand for the employer’s output declines by a third. Suppose the employer’s marginal cost of output is half the selling price (implying a demand elasticity of -2). Assume that demand is linear. With no change in its marginal cost, the firm would reduce output by a third, presumably laying off up to a third of its employees. Could workers avoid the layoffs by accepting lower wages to enable the firm to reduce its price? Or asked in another way, how much would marginal cost have to fall for the firm not to reduce output after the demand reduction?

Working out the algebra, one finds that for the firm to keep producing as much after a one-third reduction in demand, the firm’s marginal cost would have to fall by two-thirds, a decline that could only be achieved by a radical reduction in labor costs. This is surely an oversimplified view of the alternatives available to workers and employers, but the point is that workers facing a layoff after a decline in the demand for the product they produce have almost no ability to remain employed even by collectively accepting a wage cut.
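As a rough numerical check on the first step of that exercise, here is a minimal sketch in Python with illustrative parameters of my own choosing (inverse demand P = 30 - Q and a constant marginal cost of 10, so that marginal cost is half the initial price). It confirms that a proportional one-third reduction in demand, with marginal cost unchanged, leads the profit-maximizing firm to cut output by roughly a third; how far marginal cost would have to fall to avert any output reduction depends on exactly how the demand decline is specified, which is the question the algebra above addresses.

    # Illustrative linear inverse demand P = a - slope*Q with constant marginal cost c.
    a, c = 30.0, 10.0             # chosen so that c is half the initial profit-maximizing price

    def best_output(slope):
        """Profit-maximizing output found by a simple grid search over quantities."""
        quantities = [q / 100.0 for q in range(1, 3001)]
        return max(quantities, key=lambda q: (a - slope * q) * q - c * q)

    q_before = best_output(1.0)   # original demand: optimum at Q = 10, P = 20, MC = P/2
    q_after = best_output(1.5)    # quantity demanded at every price cut by a third
    print(q_before, q_after, round(1 - q_after / q_before, 2))   # 10.0, 6.67, 0.33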

That conclusion applies a fortiori when decisions whether to accept a wage cut are left to individual workers, because the willingness of workers individually to accept a wage cut is irrelevant to their chances of retaining their jobs. Being laid off because of a decline in the demand for the product a worker is producing is a very different situation from being laid off because the worker’s employer is shifting to a new technology for which the worker lacks the requisite skills and can remain employed only by accepting reassignment to a lower-paying job.

Let’s follow Lucas a bit further:

Keynes, in chapter 2, deals with the situation facing an individual unemployed worker by evasion and wordplay only. Sentences like “more labor would, as a rule, be forthcoming at the existing money wage if it were demanded” are used again and again as though, from the point of view of a jobless worker, it is unambiguous what is meant by “the existing money wage.” Unless we define an individual’s wage rate as the price someone else is willing to pay him for his labor (in which case Keynes’s assertion is defined to be false), what is it?

Lucas, Id.

I must admit that, reading this passage again perhaps 30 or more years after my first reading, I’m astonished that I could have once read it without astonishment. Lucas gives the game away by accusing Keynes of engaging in evasion and wordplay before embarking himself on sustained evasion and wordplay. The meaning of the “existing money wage” is hardly ambiguous; it is the money wage the unemployed worker was receiving before losing his job and the wage that his fellow workers, who remain employed, continue to receive.

Is Lucas suggesting that the reason that the worker lost his job while his fellow workers did not lose theirs is that the value of his marginal product fell but the value of his co-workers’ marginal product did not? Perhaps, but that would only add to my astonishment. At the prevailing wage, the employer had to reduce the number of workers until the marginal product of those remaining was high enough for the employer to continue employing them. That was not necessarily, and certainly not primarily, because the workers who were retained were more capable than those who were laid off.

The fact is, I think, that Keynes wanted to get labor markets out of the way in chapter 2 so that he could get on to the demand theory which really interested him.

More wordplay. Is it fact or opinion? Well, he says that he thinks it’s a fact. In other words, it’s really an opinion.

This is surely understandable, but what is the excuse for letting his carelessly drawn distinction between voluntary and involuntary unemployment dominate aggregative thinking on labor markets for the forty years following?

Mr. Keynes, really, what is your excuse for being such an awful human being?

[I]nvoluntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to ‘explain’ the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Lucas, Id.

Let’s rewrite this paragraph with a few strategic word substitutions:

Heliocentrism is not a fact or phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Copernicus introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: the observed movement of the planets in the heavens. Is it the task of modern theoretical physics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Copernicus died in 1543, just as his work on heliocentrism was being published. Galileo’s works on heliocentrism were not published until 1610, almost 70 years after Copernicus published his work. So, under Lucas’s forty-year time limit, Galileo had no business trying to explain Copernican heliocentrism, which had still not proven fruitful. Moreover, even after Galileo had published his works, geocentric models were providing predictions of planetary motion as good as, if not better than, the heliocentric models, so decisive empirical evidence in favor of heliocentrism was still lacking. Not until Newton published his great work roughly 70 years after Galileo, and some 140 years after Copernicus, was heliocentrism finally accepted as fact.

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problem they face. One cannot, even conceptually, arrive at a usable definition of full employment.

Lucas, Id.

Belying his claim to be introducing scientific rigor into macroeconomics, Lucas resorts to an extended scholastic inquiry into whether an unemployed worker can really ever be unemployed involuntarily. Based on his scholastic inquiry into the nature of voluntariness, Lucas declares that Keynes was mistaken because he would not accept the discipline of optimization and equilibrium. But Lucas’s insistence on the discipline of optimization and equilibrium is misplaced unless he can provide an actual mechanism whereby the notional optimization of a single agent can be reconciled with the notional optimization of other individuals.

It was his inability to provide any explanation of the mechanism whereby the notional optimization of individual agents can be reconciled with the notional optimizations of other individual agents that led Lucas to resort to rational expectations to circumvent the need for such a mechanism. He successfully persuaded the economics profession that, by evading the need to explain such a reconciliation mechanism, the profession would not be shirking its explanatory duty, but would merely be fulfilling its methodological obligation to uphold the neoclassical axioms of rationality and optimization neatly subsumed under the heading of microfoundations.

Rational expectations and microfoundations provided the pretext that could justify, or at least excuse, the absence of any explanation of how an equilibrium is reached and maintained. The rational-expectations assumption was treated as an adequate substitute for the Walrasian auctioneer, so that each and every agent, using the common knowledge (and only the common knowledge) available to all agents, would reliably anticipate the equilibrium price vector prevailing throughout their infinite lives, thereby guaranteeing continuous equilibrium and the consistency of all optimal plans. That feat having been securely accomplished, it was but a small and convenient step to collapse the multitude of individual agents into a single representative agent, so that the virtue of submitting to the discipline of optimization could find its just and fitting reward.

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture “The Pretence of Knowledge”; its principal theme was an attack on the simple notion that the long-observed correlation between aggregate demand and employment is a reliable basis for conducting macroeconomic policy. Reiterating an argument that he had made over 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek was informally anticipating the argument that Robert Lucas repackaged two years later in his famous critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between “phenomena of disorganized complexity” and “phenomena of organized complexity.” Statistical relationships or correlations between phenomena of disorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as merely the pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on a methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie models of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained that macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two basic principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those choices. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than the explicit assumption of individual rationality in every macroeconomic model and the requirement that all decisions by agents in a macroeconomic model be individually rational.

In practice, implementing Lucasian methodological principles required that in any macroeconomic model all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, all individual decisions being conditional on the decisions of other agents. The individual optimization problems can be solved only simultaneously for all agents, not for each agent in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations was assumed, based on faulty reasoning by Walras, Fisher and Pareto, who simply counted equations and unknowns. A rigorous proof of existence was only provided by Abraham Wald in 1936 and subsequently in more general form by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but Marshallian partial-equilibrium analysis presumes that all markets and prices, except the price in the single market under analysis, are in a state of equilibrium. So the Marshallian approach provides no more explanation of a process by which a set of equilibrium prices for an entire economy is, or could be, reached than the Walrasian approach.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model. It does so on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series derived from these models are obtained by attributing all observed variations in national income to unexplained shocks in productivity, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of those variations.

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1960 paper “Towards a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility is somehow a guarantee that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the concept of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was methodologically necessary and ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the endeavor ought to be conducted as a theoretical and empirical undertaking. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications of both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying the theories. But most important theories, even in pure mathematics, start out as informal theories that only gradually become axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives by which Lucas and his followers have justified their pretensions to scientific prestige and authority, and by which they have compelled compliance with those imperatives, only belie those pretensions.

General Equilibrium, Partial Equilibrium and Costs

Neoclassical economics is now bifurcated between Marshallian partial-equilibrium and Walrasian general-equilibrium analyses. With the apparent inability of neoclassical theory to explain the coordination failure of the Great Depression, J. M. Keynes proposed an alternative paradigm to explain the involuntary unemployment of the 1930s. But within two decades, Keynes’s contribution was subsumed under what became known as the neoclassical synthesis of the Keynesian and Walrasian theories (about which I have written frequently, e.g., here and here). The neoclassical synthesis eventually collapsed because Keynesian theory supposedly lacked microfoundations that could be reconciled with the assumptions of Walrasian general-equilibrium theory.

But Walrasian general-equilibrium theory provides no plausible, much less axiomatic, account of how general equilibrium is, or could be, achieved. Even the imaginary tatonnement process lacks an algorithm that guarantees that a general-equilibrium solution, if it exists, would be found. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian partial-equilibrium analysis, with market prices adjusting to equilibrate supply and demand.

Yet modern macroeconomics, despite its explicit Walrasian assumptions, implicitly relies on the Marshallian intuition that the fundamentals of general equilibrium, prices and costs, are known to agents who, except for random disturbances, continuously form rational expectations of market-clearing equilibrium prices in all markets.

I’ve written many earlier posts (e.g., here and here) contesting, in one way or another, the notion that all macroeconomic theories must be founded on first principles (i.e., microeconomic axioms about optimizing individuals). Any macroeconomic theory not appropriately founded on the axioms of individual optimization by consumers and producers is now dismissed as scientifically defective and unworthy of attention by serious scientific practitioners of macroeconomics.

When contesting the presumed necessity for macroeconomics to be microeconomically founded, I’ve often used Marshall’s partial-equilibrium method as a point of reference. Though derived from underlying preference functions that are independent of prices, the demand curves of partial-equilibrium analysis presume that all product prices, except the price of the product under analysis, are held constant. Similarly, the supply curves are derived from individual firm marginal-cost curves whose geometric position or algebraic description depends critically on the prices of raw materials and factors of production used in the production process. But neither the prices of alternative products to be purchased by consumers nor the prices of raw materials and factors of production are given independently of the general-equilibrium solution of the whole system.

Thus, partial-equilibrium analysis, to be analytically defensible, requires a ceteris-paribus proviso. But to be analytically tenable, that proviso must posit an initial position of general equilibrium. Unless the analysis starts from a state of general equilibrium, the assumption that all prices but one remain constant can’t be maintained, the constancy of disequilibrium prices being a nonsensical assumption.

The ceteris-paribus proviso also entails an assumption about the market under analysis; either the market itself, or the disturbance to which it’s subject, must be so small that any change in the equilibrium price of the product in question has de minimis repercussions on the prices of every other product and of every input and factor of production used in producing that product. Thus, the validity of partial-equilibrium analysis depends on the presumption that the unique and locally stable general equilibrium is approximately undisturbed by whatever changes result from the posited change in the single market being analyzed. But that presumption is not so self-evidently plausible that our reliance on it to make empirical predictions is always, or even usually, justified.

Perhaps the best argument for taking partial-equilibrium analysis seriously is that the analysis identifies certain deep structural tendencies that, at least under “normal” conditions of moderate macroeconomic stability (i.e., moderate unemployment and reasonable price stability), will usually be observable despite the disturbing influences that are subsumed under the ceteris-paribus proviso. That assumption — an assumption of relative ignorance about the nature of the disturbances that are assumed to be constant — posits that those disturbances are more or less random, and as likely to cause errors in one direction as another. Consequently, the predictions of partial-equilibrium analysis can be assumed to be statistically, though not invariably, correct.

Of course, the more interconnected a given market is with other markets in the economy, and the greater its size relative to the total economy, the less confidence we can have that the implications of partial-equilibrium analysis will be corroborated by empirical investigation.

Despite its frequent unsuitability, economists and commentators are often willing to deploy partial-equilibrium analysis in offering policy advice even when the necessary ceteris-paribus proviso of partial-equilibrium analysis cannot be plausibly upheld. For example, two of the leading theories of the determination of the rate of interest are the loanable-funds doctrine and the Keynesian liquidity-preference theory. Both these theories of the rate of interest suppose that the rate of interest is determined in a single market — either for loanable funds or for cash balances — and that the rate of interest adjusts to equilibrate one or the other of those two markets. But the rate of interest is an economy-wide price whose determination is an intertemporal-general-equilibrium phenomenon that cannot be reduced, as the loanable-funds and liquidity preference theories try to do, to the analysis of a single market.

Similarly, partial-equilibrium analysis of the supply of, and the demand for, labor has been used of late to predict changes in wages from immigration and to advocate for changes in immigration policy, while, in an earlier era, it was used to recommend wage reductions as a remedy for persistently high aggregate unemployment. In the General Theory, Keynes criticized those using a naïve version of the partial-equilibrium method to recommend curing high unemployment by cutting wage rates, correctly observing that full employment requires the satisfaction of certain macroeconomic equilibrium conditions that would not necessarily be met by cutting wages.

However, in the very same volume, Keynes argued that the rate of interest is determined exclusively by the relationship between the quantity of money and the demand to hold money, ignoring that the rate of interest is an intertemporal relationship between current and expected future prices, an insight earlier explained by Irving Fisher that Keynes himself had expertly deployed in his Tract on Monetary Reform and elsewhere, including chapter 17 of the General Theory itself.

Evidently, the allure of supply-demand analysis can sometimes be too powerful for well-trained economists to resist, even when they know that it ought to be resisted.

A further point also requires attention: the conditions necessary for partial-equilibrium analysis to be valid are never really satisfied; firms don’t know the costs that determine the optimal rate of production when they actually must settle on a plan of how much to produce, how much raw material to buy, and how much labor and other factors of production to employ. Marshall, the originator of partial-equilibrium analysis, analogized supply and demand to the two blades of a scissors acting jointly to achieve an intended result.

But Marshall erred in thinking that supply (i.e., cost) is an independent determinant of price, because the equality of costs and prices is a characteristic of general equilibrium. It can be applied to partial-equilibrium analysis only under the ceteris-paribus proviso that situates partial-equilibrium analysis in a pre-existing general equilibrium of the entire economy. It is only in a general-equilibrium state that the cost incurred by a firm in producing its output represents the value of the foregone output that could have been produced had the firm’s output been reduced. Only if the analyzed market is so small that changes in how much firms in that market produce do not affect the prices of the inputs used to produce that output can definite marginal-cost curves be drawn or algebraically specified.

Unless general equilibrium obtains, prices need not equal costs, as measured by the quantities and prices of inputs used by firms to produce any product. Partial equilibrium analysis is possible only if carried out in the context of general equilibrium. Cost cannot be an independent determinant of prices, because cost is itself determined simultaneously along with all other prices.

But even aside from the reasons why partial-equilibrium analysis presumes that all prices, but the price in the single market being analyzed, are general-equilibrium prices, there’s another, even more problematic, assumption underlying partial-equilibrium analysis: that producers actually know the prices that they will pay for the inputs and resources to be used in producing their outputs. The cost curves of the standard economic analysis of the firm from which the supply curves of partial-equilibrium analysis are derived, presume that the prices of all inputs and factors of production correspond to those that are consistent with general equilibrium. But general-equilibrium prices are never known by anyone except the hypothetical agents in a general-equilibrium model with complete markets, or by agents endowed with perfect foresight (aka rational expectations in the strict sense of that misunderstood term).

At bottom, Marshallian partial-equilibrium analysis is comparative statics: a comparison of two alternative (hypothetical) equilibria distinguished by some difference in the parameters characterizing the two equilibria. By comparing the equilibria corresponding to the different parameter values, the analyst can infer the effect (at least directionally) of a parameter change.

But comparative-statics analysis is subject to a serious limitation: comparing two alternative hypothetical equilibria is very different from making empirical predictions about the effects of an actual parameter change in real time.

Comparing two alternative equilibria corresponding to different values of a parameter may be suggestive of what could happen after a policy decision to change that parameter, but there are many reasons why the change implied by the comparative-statics exercise might not match or even approximate the actual change.

First, the initial state was almost certainly not an equilibrium state, so systemic changes will be difficult, if not impossible, to disentangle from the effect of parameter change implied by the comparative-statics exercise.

Second, even if the initial state was an equilibrium, the transition to a new equilibrium is never instantaneous. The transitional period therefore leads to changes that in turn induce further systemic changes that cause the new equilibrium toward which the system gravitates to differ from the final equilibrium of the comparative-statics exercise.

Third, each successive change in the final equilibrium toward which the system is gravitating leads to further changes that in turn keep changing the final equilibrium. There is no reason why the successive changes lead to convergence on any final equilibrium end state. Nor is there any theoretical proof that the adjustment path leading from one equilibrium to another ever reaches an equilibrium end state. The gap between the comparative-statics exercise and the theory of adjustment in real time remains unbridged and may, even in principle, be unbridgeable.

Finally, without a complete system of forward and state-contingent markets, equilibrium requires not just that current prices converge to equilibrium prices; it requires that the expectations of all agents about future prices converge to equilibrium expectations of future prices. Unless agents’ expectations of future prices converge to their equilibrium values, an equilibrium may not even exist, let alone be approached or attained.

So the Marshallian assumption that producers know their costs of production and make production and pricing decisions based on that knowledge is both factually wrong and logically untenable. Nor do producers know what the demand curves for their products really look like, except in the extreme case in which suppliers take market prices to be parametrically determined. But even then, they make decisions not on known prices, but on expected prices. Their expectations are constantly being tested against market information about actual prices, information that causes decision makers to affirm or revise their expectations in light of the constant flow of new information about prices and market conditions.

I don’t reject partial-equilibrium analysis, but I do call attention to its limitations, and to its unsuitability as a supposedly essential foundation for macroeconomic analysis, especially inasmuch as microeconomic analysis, aka partial-equilibrium analysis, is utterly dependent on the uneasy macrofoundation of general-equilibrium theory. The intuition of Marshallian partial equilibrium cannot fill the gap, long ago noted by Kenneth Arrow, in the neoclassical theory of equilibrium price adjustment.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the idea of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that revolutionized macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another mathematician in Menger’s circle, John von Neumann, who shared an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.
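To see concretely what that resolution looks like, here is a minimal sketch in Python of a two-station pursuit game of the Holmes-Moriarty type. The payoff numbers are purely illustrative assumptions of my own (Moriarty gains if he alights at the same station as Holmes and loses if he guesses wrong), not the payoffs used by von Neumann and Morgenstern; the sketch simply computes the mixed-strategy (minimax) solution of a 2x2 zero-sum game.

    # Payoffs to Moriarty (the row player); Holmes, the column player, receives the negative.
    # Rows: Moriarty alights at Dover / Canterbury. Columns: Holmes alights at Dover / Canterbury.
    (a11, a12), (a21, a22) = (1.0, -1.0), (-1.0, 1.0)   # illustrative numbers only

    denom = a11 - a12 - a21 + a22                # nonzero because the game has no saddle point
    p_moriarty_dover = (a22 - a21) / denom       # Moriarty's mixing probability: leaves Holmes indifferent
    q_holmes_dover = (a22 - a12) / denom         # Holmes's mixing probability: leaves Moriarty indifferent
    value = (a11 * a22 - a12 * a21) / denom      # expected payoff to Moriarty when both mix optimally

    print(p_moriarty_dover, q_holmes_dover, value)   # 0.5, 0.5, 0.0 with these symmetric payoffs

With asymmetric payoffs the optimal probabilities shift away from one-half, but the logic is the same: once the payoffs are specified, each player's randomization makes the other's choice a matter of indifference, and the interaction has a well-defined equilibrium rather than an endless regress of "he thinks that I think."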

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent — in equilibrium. Under Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decisions with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader), or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general-equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general-equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general-equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model,” and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, modeling a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapounov differential equations could describe market price adjustments as functions of market excess demands that would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
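For readers curious about what such a stability exercise amounts to, here is a minimal sketch in Python of a tatonnement process for a toy two-good exchange economy with Cobb-Douglas traders. The preferences, endowments, and adjustment speed are illustrative assumptions of my own, not Samuelson's formulation: the fictional auctioneer simply raises the relative price of a good when its excess demand is positive and lowers it when excess demand is negative, with no trading allowed until the process stops.

    # Toy tatonnement: two goods, good 2 is the numeraire (its price is fixed at 1).
    # Trader 1 owns one unit of good 1 and spends a share alpha of wealth on good 1;
    # trader 2 owns one unit of good 2 and spends a share beta of wealth on good 1.
    alpha, beta = 0.4, 0.3

    def excess_demand_good1(p):
        demand_by_1 = alpha * p / p        # trader 1's wealth is p, so demand for good 1 is alpha*p/p
        demand_by_2 = beta * 1.0 / p       # trader 2's wealth is 1
        return demand_by_1 + demand_by_2 - 1.0   # total demand minus the one unit endowed

    p = 2.0                                # arbitrary starting price for good 1
    for _ in range(1000):                  # the auctioneer adjusts the price in proportion to excess demand
        p = max(p + 0.5 * excess_demand_good1(p), 0.01)

    print(round(p, 4))                     # converges to beta / (1 - alpha) = 0.5, where the market clears

The process converges here only because the assumed Cobb-Douglas preferences imply gross substitutability; nothing in the general Walrasian framework guarantees that property, which is one reason the stability results proved so disappointing.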

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics exercises routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped the determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had raised in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that, before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general-equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an undisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization is a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, did the correlate principle of market-clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent.

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding it conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.
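To see concretely why interdependent optimization cannot simply be assumed to succeed, consider a minimal sketch of my own (not taken from Nash or from De Vroey) using the matching-pennies payoff structure commonly used to formalize the Holmes-Moriarty pursuit: Moriarty wins if he alights at the same station as Holmes, and Holmes wins if he does not. With payoffs listed as (Holmes, Moriarty):

\[
\begin{array}{c|cc}
 & \text{Moriarty: Dover} & \text{Moriarty: Canterbury} \\ \hline
\text{Holmes: Dover} & (-1,\ 1) & (1,\ -1) \\
\text{Holmes: Canterbury} & (1,\ -1) & (-1,\ 1)
\end{array}
\]

No cell of the matrix is a mutual best response, so no pair of pure plans is consistent: each player’s plan is optimal only if he has correctly anticipated the other’s. The only Nash equilibrium has each player randomizing with probability one-half, and unless both happen to hold exactly those equilibrium expectations, at least one of them will find that his optimal plan cannot be executed as optimal behavior.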

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. Mainstream macroeconomists from a variety of backgrounds have now reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, with unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable, because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital, and the lack of any account of endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary-equilibrium paradigm might have provided a viable way forward, and a path toward a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance on how to improve the current sorry state of macroeconomics.

 

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek Hicks Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving “meaningful theorems” from the first- and second-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects of the parameter change on other markets to be negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems” were a misnomer, the name reflecting mid-20th-century operationalism. They can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”

15. The only suggestion of an adjustment process is an assumption that the price adjustment in any market is an increasing function of excess demand in the market (a simple numerical sketch of such a rule appears at the end of this thread).

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model in his 1959 paper on price adjustment, observing that, because under perfect competition every agent takes prices as given, the theory leaves unexplained who actually changes prices when markets are not in equilibrium.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-markets assumption a backdoor reintroduction of perfect foresight; it also excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning, with each agent possessing the knowledge available only to an omniscient central planner.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transactions prices together notwithstanding the expectational differences that remain.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.
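As promised in point 15, here is a minimal numerical sketch, in Python, of the kind of price-adjustment rule mentioned there: each price is revised in proportion to the excess demand for the corresponding good. The excess-demand functions are purely hypothetical toy assumptions of my own, chosen so that the rule happens to converge; the SMD results discussed in points 19-21 are precisely the warning that nothing guarantees such convergence in general.

    # Toy excess-demand system for two goods; a third, numeraire good has its price fixed at 1.
    # These functional forms are illustrative assumptions, not a model of any actual economy.
    def excess_demand(p):
        z0 = 10 - 2 * p[0] + 0.5 * p[1]   # excess demand for good 0
        z1 = 8 + 0.5 * p[0] - 3 * p[1]    # excess demand for good 1
        return [z0, z1]

    # Raise each price when its excess demand is positive, lower it when negative,
    # stopping when all excess demands are (approximately) zero.
    def tatonnement(p, step=0.1, tol=1e-8, max_iters=10000):
        for _ in range(max_iters):
            z = excess_demand(p)
            if max(abs(zi) for zi in z) < tol:
                return p  # approximate equilibrium found
            p = [max(pi + step * zi, 0.0) for pi, zi in zip(p, z)]
        return p  # may not have converged

    print(tatonnement([1.0, 1.0]))  # converges to roughly (5.91, 3.65) for this toy system

For this particular system the iteration settles down because the excess-demand functions are well behaved; change their shapes, or make the cross-effects between the goods strong enough, and the same rule can cycle or diverge, which is the point of the SMD results.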

Graeber Against Economics

David Graeber’s vitriolic essay “Against Economics” in the New York Review of Books has generated responses from Noah Smith and Scott Sumner among others. I don’t disagree with much that Noah or Scott have to say, but I want to dig a little deeper than they did into some of Graeber’s arguments, because even though I think he is badly misinformed on many if not most of the subjects he writes about, I actually have some sympathy for his dissatisfaction with the current state of economics. Graeber wastes no time on pleasantries.

There is a growing feeling, among those who have the responsibility of managing large economies, that the discipline of economics is no longer fit for purpose. It is beginning to look like a science designed to solve problems that no longer exist.

A serious polemicist should avoid blatant mischaracterizations, exaggerations and cheap shots, and should be well-grounded in the object of his critique, thereby avoiding criticisms that undermine his own claims to expertise. I grant that Graeber has some valid criticisms to make, even agreeing with him, at least in part, on some of them. But his indiscriminate attacks on, and caricatures of, all neoclassical economics betray a superficial understanding of that discipline.

Graeber begins by attacking what he considers the misguided and obsessive focus on inflation by economists.

A good example is the obsession with inflation. Economists still teach their students that the primary economic role of government—many would insist, its only really proper economic role—is to guarantee price stability. We must be constantly vigilant over the dangers of inflation. For governments to simply print money is therefore inherently sinful.

Every currency unit, or banknote issued by a central bank, now in circulation, as Graeber must know, has been “printed.” So to say that economists consider it sinful for governments to print money is either a deliberate falsehood, or an emotional rhetorical outburst, as Graeber immediately, and apparently unwittingly, acknowledges!

If, however, inflation is kept at bay through the coordinated action of government and central bankers, the market should find its “natural rate of unemployment,” and investors, taking advantage of clear price signals, should be able to ensure healthy growth. These assumptions came with the monetarism of the 1980s, the idea that government should restrict itself to managing the money supply, and by the 1990s had come to be accepted as such elementary common sense that pretty much all political debate had to set out from a ritual acknowledgment of the perils of government spending. This continues to be the case, despite the fact that, since the 2008 recession, central banks have been printing money frantically [my emphasis] in an attempt to create inflation and compel the rich to do something useful with their money, and have been largely unsuccessful in both endeavors.

Graeber’s use of the ambiguous pronoun “this” beginning the last sentence of the paragraph betrays his own confusion about what he is saying. Central banks are printing money and attempting to “create” inflation while supposedly still believing that inflation is a menace and printing money is a sin. Go figure.

We now live in a different economic universe than we did before the crash. Falling unemployment no longer drives up wages. Printing money does not cause inflation. Yet the language of public debate, and the wisdom conveyed in economic textbooks, remain almost entirely unchanged.

Again showing an inadequate understanding of basic economic theory, Graeber suggests that, in theory if not practice, falling unemployment should cause wages to rise. The Phillips Curve, upon which Graeber’s suggestion relies, represents the empirically observed negative correlation between the rate of average wage increase and the rate of unemployment. But correlation does not imply causation, so there is no basis in economic theory to assert that falling unemployment causes the rate of increase in wages to accelerate. That the empirical correlation between unemployment and wage increases has not recently been in evidence provides no compelling reason for changing textbook theory.

From this largely unfounded attack on economic theory – a theory which I myself consider, in many respects, inadequate and unreliable – Graeber launches a bitter diatribe against the supposed hegemony of economists over policy-making.

Mainstream economists nowadays might not be particularly good at predicting financial crashes, facilitating general prosperity, or coming up with models for preventing climate change, but when it comes to establishing themselves in positions of intellectual authority, unaffected by such failings, their success is unparalleled. One would have to look at the history of religions to find anything like it.

The ability to predict financial crises would be desirable, but that cannot be the sole criterion for whether economics has advanced our understanding of how economic activity is organized or what effects policy changes have. (I note parenthetically that many economists defensively reject the notion that economic crises are predictable on the grounds that if economists could predict a future economic crisis, those predictions would be immediately self-fulfilling. This response, of course, effectively disproves the idea that economists could predict that an economic crisis would occur in the way that astronomers predict solar eclipses. But this response slays a strawman. The issue is not whether economists can predict future crises, but whether they can identify conditions indicating an increased likelihood of a crisis and suggest precautionary measures to reduce the likelihood that a potential crisis will occur. But Graeber seems uninterested in or incapable of engaging the question at even this moderate level of subtlety.)

In general, I doubt that economists can make more than a modest contribution to improved policy-making, and the best that one can hope for is probably that they steer us away from the worst potential decisions rather than identifying the best ones. But no one, as far as I know, has yet been burned at the stake by a tribunal of economists.

To this day, economics continues to be taught not as a story of arguments—not, like any other social science, as a welter of often warring theoretical perspectives—but rather as something more like physics, the gradual realization of universal, unimpeachable mathematical truths. “Heterodox” theories of economics do, of course, exist (institutionalist, Marxist, feminist, “Austrian,” post-Keynesian…), but their exponents have been almost completely locked out of what are considered “serious” departments, and even outright rebellions by economics students (from the post-autistic economics movement in France to post-crash economics in Britain) have largely failed to force them into the core curriculum.

I am now happy to register agreement with something that Graeber says. Economists in general have become overly attached to axiomatic and formalistic mathematical models that create a false and misleading impression of rigor and mathematical certainty. In saying this, I don’t dispute that mathematical modeling is an important part of much economic theorizing, but it should not exclude other approaches to economic analysis and discourse.

As a result, heterodox economists continue to be treated as just a step or two away from crackpots, despite the fact that they often have a much better record of predicting real-world economic events. What’s more, the basic psychological assumptions on which mainstream (neoclassical) economics is based—though they have long since been disproved by actual psychologists—have colonized the rest of the academy, and have had a profound impact on popular understandings of the world.

That heterodox economists have a better record of predicting economic events than mainstream economists is an assertion for which Graeber offers no evidence or examples. I would not be surprised if he could cite examples, but one would have to weigh the evidence surrounding those examples before concluding that predictions by heterodox economists were more accurate than those of their mainstream counterparts.

Graeber returns to the topic of monetary theory, which seems a particular bugaboo of his. Taking the extreme liberty of holding up Mrs. Theresa May as a spokesperson for orthodox economics, he focuses on her definitive 2017 statement that there is no magic money tree.

The truly extraordinary thing about May’s phrase is that it isn’t true. There are plenty of magic money trees in Britain, as there are in any developed economy. They are called “banks.” Since modern money is simply credit, banks can and do create money literally out of nothing, simply by making loans. Almost all of the money circulating in Britain at the moment is bank-created in this way.

What Graeber chooses to ignore is that banks do not operate magically; they make loans and create deposits in seeking to earn profits. Whether they make good or bad decisions is debatable, but the debate isn’t about a magical process; it’s a debate about theory and evidence. Graeber describes how he thinks economists think about how banks create money, correctly observing that there is a debate about how that process works, but without understanding those differences or their significance.

Economists, for obvious reasons, can’t be completely oblivious to the role of banks, but they have spent much of the twentieth century arguing about what actually happens when someone applies for a loan. One school insists that banks transfer existing funds from their reserves, another that they produce new money, but only on the basis of a multiplier effect. . . Only a minority—mostly heterodox economists, post-Keynesians, and modern money theorists—uphold what is called the “credit creation theory of banking”: that bankers simply wave a magic wand and make the money appear, secure in the confidence that even if they hand a client a credit for $1 million, ultimately the recipient will put it back in the bank again, so that, across the system as a whole, credits and debts will cancel out. Rather than loans being based in deposits, in this view, deposits themselves were the result of loans.

The one thing it never seemed to occur to anyone to do was to get a job at a bank, and find out what actually happens when someone asks to borrow money. In 2014 a German economist named Richard Werner did exactly that, and discovered that, in fact, loan officers do not check their existing funds, reserves, or anything else. They simply create money out of thin air, or, as he preferred to put it, “fairy dust.”

Graeber is right that economists differ in how they understand banking. But the simple transfer-of-funds view, a product of the eighteenth century, was gradually rejected over the course of the nineteenth century; the money-multiplier view that largely superseded it, after enjoying a half-century or more of dominance as a theory of banking, still remains a popular way for introductory textbooks to explain how banking works, though it would be better if it were decently buried and forgotten. But since James Tobin’s classic essay “Commercial banks as creators of money” was published in 1963, most economists who have thought carefully about banking have concluded that the amount of deposits created by banks corresponds to the quantity of deposits that the public, given their expectations about the future course of the economy and the future course of prices, chooses to hold. The important point is that while a bank can create deposits without incurring more than the negligible cost of making a book-keeping, or an electronic, entry in a customer’s account, the creation of a deposit typically leads the bank to hold either additional reserves in its account with the Fed or some amount of Treasury instruments convertible, on very short notice, into reserves at the Fed.
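For readers who have not seen it, one standard statement of the textbook money-multiplier formula deprecated above is

\[
M \;=\; \frac{1+c}{c+r}\,B,
\]

where \(B\) is the monetary base, \(r\) the ratio of reserves that banks hold against deposits, and \(c\) the currency-to-deposit ratio chosen by the public. The Tobin-inspired view sketched above treats \(r\) and \(c\) not as fixed technical coefficients but as behavioral magnitudes, so that the quantity of deposits is governed by the public’s willingness to hold them rather than mechanically by the size of the base.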

Graeber seems to think that there is something fundamental at stake for the whole of macroeconomics in the question whether deposits create loans or loans create deposits. I agree that it’s an important question, but not as significant as Graeber believes. But aside from that nuance, what’s remarkable is that Graeber actually acknowledges that the weight of professional opinion is on the side that says that loans create deposits. He thus triumphantly cites a report by Bank of England economists that correctly explained that banks create money and do so in the normal course of business by making loans.

Before long, the Bank of England . . . rolled out an elaborate official report called “Money Creation in the Modern Economy,” replete with videos and animations, making the same point: existing economics textbooks, and particularly the reigning monetarist orthodoxy, are wrong. The heterodox economists are right. Private banks create money. Central banks like the Bank of England create money as well, but monetarists are entirely wrong to insist that their proper function is to control the money supply.

Graeber, I regret to say, is simply exposing the inadequacy of his knowledge of the history of economics. Adam Smith in The Wealth of Nations explained that banks create money and, in doing so, save the resources that would otherwise have been wasted on creating additional gold and silver. Subsequent economists from David Ricardo through Henry Thornton, J. S. Mill and R. G. Hawtrey were perfectly aware that banks can supply money — either banknotes or deposits — at less than the cost of mining and minting new coins, as they extend their credit in making loans to borrowers. So what is at issue, Graeber to the contrary notwithstanding, is not a dispute between orthodoxy and heterodoxy.

In fact, central banks do not in any sense control the money supply; their main function is to set the interest rate—to determine how much private banks can charge for the money they create.

Central banks set a rental price for reserves, thereby controlling the quantity of reserves, into which bank deposits are convertible, that is available to the economy. One way to think about that quantity is that the stock of reserves, along with the aggregate demand to hold reserves, determines the exchange value of reserves and hence the price level; another way to think about it is that the interest rate, or the implied policy stance of the central bank, helps to determine the public’s expectations about the future course of the price level, which is what determines – within some margin of error or range – what the future course of the price level will actually turn out to be.

Almost all public debate on these subjects is therefore based on false premises. For example, if what the Bank of England was saying were true, government borrowing didn’t divert funds from the private sector; it created entirely new money that had not existed before.

This is just silly. Funds may or may not be diverted from the private sector, but the total resources available to society are finite. If the central bank creates additional money, it creates additional claims to those resources, and those additional claims necessarily affect the prices of inputs and of outputs.

One might have imagined that such an admission would create something of a splash, and in certain restricted circles, it did. Central banks in Norway, Switzerland, and Germany quickly put out similar papers. Back in the UK, the immediate media response was simply silence. The Bank of England report has never, to my knowledge, been so much as mentioned on the BBC or any other TV news outlet. Newspaper columnists continued to write as if monetarism was self-evidently correct. Politicians continued to be grilled about where they would find the cash for social programs. It was as if a kind of entente cordiale had been established, in which the technocrats would be allowed to live in one theoretical universe, while politicians and news commentators would continue to exist in an entirely different one.

Even if we stipulate that this characterization of what the BBC and newspaper columnists believe is correct, what we would have — at best — is a commentary on the ability of economists to communicate their understanding of how the economy works to the intelligentsia that communicates to ordinary citizens. It is not in and of itself a commentary on the state of economic knowledge, inasmuch as Graeber himself concedes that most economists don’t accept monetarism. And that has been the case, as Noah Smith pointed out in his Bloomberg column on Graeber, since the early 1980s when the Monetarist experiment in trying to conduct monetary policy by controlling the monetary aggregates proved entirely unworkable and had to be abandoned as it was on the verge of precipitating a financial crisis.

Only after this long warmup decrying the sorry state of contemporary economic theory does Graeber begin discussing the book under review, Money and Government by Robert Skidelsky.

What [Skidelsky] reveals is an endless war between two broad theoretical perspectives. . . The crux of the argument always seems to turn on the nature of money. Is money best conceived of as a physical commodity, a precious substance used to facilitate exchange, or is it better to see money primarily as a credit, a bookkeeping method or circulating IOU—in any case, a social arrangement? This is an argument that has been going on in some form for thousands of years. What we call “money” is always a mixture of both, and, as I myself noted in Debt (2011), the center of gravity between the two tends to shift back and forth over time. . . .One important theoretical innovation that these new bullion-based theories of money allowed was, as Skidelsky notes, what has come to be called the quantity theory of money (usually referred to in textbooks—since economists take endless delight in abbreviations—as QTM).

But these two perspectives are not mutually exclusive, and, depending on time, place, circumstances, and the particular problem that is the focus of attention, either of the two may be the appropriate paradigm for analysis.

The QTM argument was first put forward by a French lawyer named Jean Bodin, during a debate over the cause of the sharp, destablizing price inflation that immediately followed the Iberian conquest of the Americas. Bodin argued that the inflation was a simple matter of supply and demand: the enormous influx of gold and silver from the Spanish colonies was cheapening the value of money in Europe. The basic principle would no doubt have seemed a matter of common sense to anyone with experience of commerce at the time, but it turns out to have been based on a series of false assumptions. For one thing, most of the gold and silver extracted from Mexico and Peru did not end up in Europe at all, and certainly wasn’t coined into money. Most of it was transported directly to China and India (to buy spices, silks, calicoes, and other “oriental luxuries”), and insofar as it had inflationary effects back home, it was on the basis of speculative bonds of one sort or another. This almost always turns out to be true when QTM is applied: it seems self-evident, but only if you leave most of the critical factors out.

In the case of the sixteenth-century price inflation, for instance, once one takes account of credit, hoarding, and speculation—not to mention increased rates of economic activity, investment in new technology, and wage levels (which, in turn, have a lot to do with the relative power of workers and employers, creditors and debtors)—it becomes impossible to say for certain which is the deciding factor: whether the money supply drives prices, or prices drive the money supply.

As a matter of logic, if the value of money depends on the precious metals (gold or silver) from which coins were minted, the value of money is necessarily affected by a change in the value of the metals used to coin money. And because a large increase in the stock of gold and silver, as Graeber concedes, must reduce the value of those metals, the subsequent inflation is attributable, at least in part, to the gold and silver discoveries, even if the newly mined gold and silver was shipped mainly to privately held Indian and Chinese hoards rather than minted into new coins. An exogenous increase in prices may well have caused the quantity of credit money to increase, but that is analytically distinct from the inflationary effect of a reduced value of gold or silver when, as was the case in the sixteenth century, money is legally defined as a specific weight of gold or silver.

Technically, this comes down to a choice between what are called exogenous and endogenous theories of money. Should money be treated as an outside factor, like all those Spanish dubloons supposedly sweeping into Antwerp, Dublin, and Genoa in the days of Philip II, or should it be imagined primarily as a product of economic activity itself, mined, minted, and put into circulation, or more often, created as credit instruments such as loans, in order to meet a demand—which would, of course, mean that the roots of inflation lie elsewhere?

There is no such choice, because any theory must posit certain initial conditions and definitions, which are given or exogenous to the analysis. How the theory is framed and which variables are treated as exogenous and which are treated as endogenous is a matter of judgment in light of the problem and the circumstances. Graeber is certainly correct that, in any realistic model, the quantity of money is endogenously, not exogenously, determined, but that doesn’t mean that the value of gold and silver may not usefully be treated as exogenous in a system in which money is defined as a weight of gold or silver.

To put it bluntly: QTM is obviously wrong. Doubling the amount of gold in a country will have no effect on the price of cheese if you give all the gold to rich people and they just bury it in their yards, or use it to make gold-plated submarines (this is, incidentally, why quantitative easing, the strategy of buying long-term government bonds to put money into circulation, did not work either). What actually matters is spending.

Graeber is talking in circles, failing to distinguish between the quantity theory of money – a theory about the value of a pure medium of exchange with no use except to be received in exchange – and a theory of the real value of gold and silver when money is defined as a weight of gold or silver. The value of gold (or silver) in monetary uses must be roughly equal to its value in non-monetary uses, which is determined by the total stock of gold and the demand to hold gold or to use it in coinage or for other uses (e.g., jewelry and ornamentation). An increase in the stock of gold relative to demand must reduce its value. That relationship between price and quantity is not the same as QTM. The quantity of a metallic money will increase as its value in non-monetary uses declines. If there were literally an unlimited demand for newly mined gold to be immediately sent unused into hoards, Graeber’s argument would be correct. But the fact that much of the newly mined gold initially went into hoards does not mean that all of the newly mined gold went into hoards.

In sum, Graeber is confused between the quantity theory of money and a theory of a commodity money used both as money and as a real commodity. The quantity theory of money of a pure medium of exchange posits that changes in the quantity of money cause proportionate changes in the price level. Changes in the quantity of a real commodity also used as money have nothing to do with the quantity theory of money.
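For clarity, the proportionality claim of the quantity theory is conventionally expressed through the equation of exchange,

\[
M V = P Q,
\]

where \(M\) is the quantity of money, \(V\) its velocity of circulation, \(P\) the price level, and \(Q\) the volume of real transactions (or output); holding \(V\) and \(Q\) constant, a change in \(M\) implies an equiproportionate change in \(P\). Under a metallic standard, by contrast, the price level is anchored by the real value of the monetary metal, so changes in the stock of gold or silver affect prices by changing that value, not by way of an exogenously given \(M\).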

Relying on a dubious account of the history of monetary theory by Skidelsky, Graeber blames the obsession of economists with the quantity theory for repeated monetary disturbances, starting with the late-seventeenth-century deflation in Britain, when silver appreciated relative to gold, causing prices measured in silver to fall. Graeber thus fails to see that, under a metallic money, real disturbances do have repercussions on the level of prices, repercussions having nothing to do with an exogenous prior change in the quantity of money.

According to Skidelsky, the pattern was to repeat itself again and again, in 1797, the 1840s, the 1890s, and, ultimately, the late 1970s and early 1980s, with Thatcher and Reagan’s (in each case brief) adoption of monetarism. Always we see the same sequence of events:

(1) The government adopts hard-money policies as a matter of principle.

(2) Disaster ensues.

(3) The government quietly abandons hard-money policies.

(4) The economy recovers.

(5) Hard-money philosophy nonetheless becomes, or is reinforced as, simple universal common sense.

There is so much indiscriminate generalization here that it is hard to know what to make of it. But the conduct of monetary policy has always been fraught, and learning has been slow and painful. We can and must learn to do better, but blanket condemnations of economics are unlikely to lead to better outcomes.

How was it possible to justify such a remarkable string of failures? Here a lot of the blame, according to Skidelsky, can be laid at the feet of the Scottish philosopher David Hume. An early advocate of QTM, Hume was also the first to introduce the notion that short-term shocks—such as Locke produced—would create long-term benefits if they had the effect of unleashing the self-regulating powers of the market:

Actually I agree that Hume, as great and insightful a philosopher as he was and as sophisticated an economic observer as he was, was an unreliable monetary theorist. And one of the reasons he was led astray was his unwarranted attachment to the quantity theory of money, an attachment that was not shared by his close friend Adam Smith.

Ever since Hume, economists have distinguished between the short-run and the long-run effects of economic change, including the effects of policy interventions. The distinction has served to protect the theory of equilibrium, by enabling it to be stated in a form which took some account of reality. In economics, the short-run now typically stands for the period during which a market (or an economy of markets) temporarily deviates from its long-term equilibrium position under the impact of some “shock,” like a pendulum temporarily dislodged from a position of rest. This way of thinking suggests that governments should leave it to markets to discover their natural equilibrium positions. Government interventions to “correct” deviations will only add extra layers of delusion to the original one.

I also agree that focusing on long-run equilibrium without regard to short-run fluctuations can lead to terrible macroeconomic outcomes, but that doesn’t mean that long-run effects are never of concern and may be safely disregarded. But just as current suffering must not be disregarded when pursuing vague and uncertain long-term benefits, ephemeral transitory benefits shouldn’t obscure serious long-term consequences. Weighing such alternatives isn’t easy, but nothing is gained by denying that the alternatives exist. Making those difficult choices is inherent in policy-making, whether macroeconomic or climate policy-making.

Although Graeber takes a valid point – that a supposed tendency toward an optimal long-run equilibrium does not justify disregard of an acute short-term problem – to an extreme, his criticism of the New Classical approach to policy-making that replaced the flawed mainstream Keynesian macroeconomics of the late 1970s is worth listening to. The New Classical approach, which self-consciously rejected any policy aimed at short-run considerations owing to a time-inconsistency paradox, was based almost entirely on the logic of general-equilibrium theory and on an illegitimate methodological argument rejecting, as unscientific and unworthy of serious consideration in the brave New Classical world of scientific macroeconomics, all macroeconomic theories not rigorously deduced from the unarguable axiom of optimizing behavior by rational agents (and therefore not, in the official jargon, microfounded).

It’s difficult for outsiders to see what was really at stake here, because the argument has come to be recounted as a technical dispute between the roles of micro- and macroeconomics. Keynesians insisted that the former is appropriate to studying the behavior of individual households or firms, trying to optimize their advantage in the marketplace, but that as soon as one begins to look at national economies, one is moving to an entirely different level of complexity, where different sorts of laws apply. Just as it is impossible to understand the mating habits of an aardvark by analyzing all the chemical reactions in their cells, so patterns of trade, investment, or the fluctuations of interest or employment rates were not simply the aggregate of all the microtransactions that seemed to make them up. The patterns had, as philosophers of science would put it, “emergent properties.” Obviously, it was necessary to understand the micro level (just as it was necessary to understand the chemicals that made up the aardvark) to have any chance of understanding the macro, but that was not, in itself, enough.

As an aside, it’s worth noting that the denial or disregard of the possibility of any emergent properties by New Classical economists (of which what came to be known as New Keynesian economics is really a mildly schismatic offshoot) is nicely illustrated by the un-self-conscious alacrity with which the representative-agent approach was adopted as a modeling strategy in the first few generations of New Classical models. That New Classical theorists now insist that representative agency is not essential to New Classical modeling is true, but the methodologically reductive nature of New Classical macroeconomics, in which all macroeconomic theories must be derived under the axiom of individually maximizing behavior except insofar as specific “frictions” are introduced by explicit assumption, is essential. (See here, here, and here)

The counterrevolutionaries, starting with Keynes’s old rival Friedrich Hayek . . . took aim directly at this notion that national economies are anything more than the sum of their parts. Politically, Skidelsky notes, this was due to a hostility to the very idea of statecraft (and, in a broader sense, of any collective good). National economies could indeed be reduced to the aggregate effect of millions of individual decisions, and, therefore, every element of macroeconomics had to be systematically “micro-founded.”

Hayek’s role in the microfoundations movement is important, but his position was more sophisticated and less methodologically doctrinaire than that of the New Classical macroeconomists, if for no other reason than that Hayek didn’t believe that macroeconomics should, or could, be derived from general-equilibrium theory. His criticism of Keynesian macroeconomics for being insufficiently grounded in microeconomic principles, like that of economists such as Clower and Leijonhufvud, was aimed at finding microeconomic arguments that could explain, embellish, and modify the propositions of Keynesian macroeconomic theory. That is the sort of scientific – not methodological – reductivism that Hayek’s friend Karl Popper advocated: the theoretical and empirical challenge of reducing a higher-level theory to more fundamental foundations, as when physicists and chemists search for theoretical breakthroughs that allow the propositions of chemistry to be reduced to more fundamental propositions of physics. The attempt to reduce chemistry to underlying physical principles is very different from a methodological rejection of all chemistry that cannot be derived from underlying deep physical theories.

There is probably more than a grain of truth in Graeber’s belief that there was a political and ideological subtext in the demand for microfoundations by New Classical macroeconomists, but the success of the microfoundations program was also the result of philosophically unsophisticated methodological error. How to apportion the share of blame going to mistaken methodology, professional and academic opportunism, and a hidden political agenda is a question worthy of further investigation. The easy part is to identify the mistaken methodology, which Graeber does. As for the rest, Graeber simply asserts bad faith, but with little evidence.

In Graeber’s comprehensive condemnation of modern economics, the efficient market hypothesis, being closely related to the rational-expectations hypothesis so central to New Classical economics, is not spared either. Here again, though I share and sympathize with his disdain for EMH, Graeber can’t resist exaggeration.

In other words, we were obliged to pretend that markets could not, by definition, be wrong—if in the 1980s the land on which the Imperial compound in Tokyo was built, for example, was valued higher than that of all the land in New York City, then that would have to be because that was what it was actually worth. If there are deviations, they are purely random, “stochastic” and therefore unpredictable, temporary, and, ultimately, insignificant.

Of course, no one is obliged to pretend that markets could not be wrong — and certainly not by a definition. The EMH simply asserts that the price of an asset reflects all the publicly available information. But what EMH asserts is certainly not true in many or even most cases, because people with non-public information (or with superior capacity to process public information) may affect asset prices, and such people may profit at the expense of those less knowledgeable or less competent in anticipating price changes. Moreover, those advantages may result from (largely wasted) resources devoted to acquiring and processing information, and it is those people who make fortunes betting on the future course of asset prices.
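One common way of formalizing that assertion (a sketch of the semi-strong version, not the only formulation in the literature) is that, conditional on the publicly available information set \(\Omega_t\), an asset’s expected return equals its required return \(r\):

\[
E\left[\,p_{t+1} + d_{t+1} \mid \Omega_t\,\right] \;=\; (1+r)\,p_t ,
\]

where \(d_{t+1}\) is any dividend or other payout, so that no trading rule based on \(\Omega_t\) alone can be expected to earn more than \(r\). Traders with non-public information, or with a superior capacity to process public information, are precisely those for whom the equality need not bind, which is the point made above.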

Graeber then quotes Skidelsky approvingly:

There is a paradox here. On the one hand, the theory says that there is no point in trying to profit from speculation, because shares are always correctly priced and their movements cannot be predicted. But on the other hand, if investors did not try to profit, the market would not be efficient because there would be no self-correcting mechanism. . .

Secondly, if shares are always correctly priced, bubbles and crises cannot be generated by the market….

This attitude leached into policy: “government officials, starting with [Fed Chairman] Alan Greenspan, were unwilling to burst the bubble precisely because they were unwilling to even judge that it was a bubble.” The EMH made the identification of bubbles impossible because it ruled them out a priori.

So the apparent paradox that concerns Skidelsky and Graeber dissolves upon (only a modest amount of) further reflection. Because prices reflect available information only imperfectly, traders who devote resources to acquiring and processing information can earn a return on that effort, and it is their trading that pushes prices toward, without ever exactly reaching, their “correct” values. Properly understood and suitably revised, the EMH thus makes it clear that bubbles can occur. But that doesn’t mean that bursting bubbles is a job that can be safely delegated to any agency, including the Fed.

Moreover, the housing bubble peaked in early 2006, two and a half years before the financial crisis in September 2008. The financial crisis was not unrelated to the housing bubble, which undoubtedly added to the fragility of the financial system and its vulnerability to macroeconomic shocks, but the main cause of the crisis was Fed policy that was unnecessarily focused on a temporary blip in commodity prices persuading the Fed not to loosen policy in 2008 during a worsening recession. That was a scenario similar to the one in 1929 when concern about an apparent stock-market bubble caused the Fed to repeatedly tighten money, raising interest rates, thereby causing a downturn and crash of asset prices triggering the Great Depression.

Graeber and Skidelsky correctly identify some of the problems besetting macroeconomics, but their indiscriminate attack on all economic theory is unlikely to improve the situation. A pity, because a more focused and sophisticated critique of economics than the one they have served up has never been more urgently needed to enable economists to perform the modest service to mankind of which they might be capable.

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper’s work among scientists.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories presumed to be valid for the purpose of making the prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed such criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, them. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he really did espouse the naïve version of falsificationism which, only under duress, he would tone down to meet the objections raised against the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, because one of the auxiliary assumptions required to make the prediction is false, or because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of those theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 by scientists who posited (ad hoc) the existence of another planet to explain why the planet Uranus did not follow its predicted path. Rather than conclude that the Newtonian theory was falsified by the failure of Uranus to follow the orbital path predicted by Newtonian theory, the French astronomer Urbain Le Verrier posited the existence of another planet that would account for the path actually followed by Uranus. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement that any ad hoc assumption be independently testable would have meant accepting the refutation of the Newtonian theory rather than positing the untestable — but true — other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are OK; it is rather that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by direct sensory observation: the earth seems flat, and the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could be reconciled with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, predicting correctly whenever the one under consideration predicts correctly and predicting more accurately than the one under consideration when the latter fails to predict correctly, the alternative theory is surely preferable, but that simply underscores the point that evaluating any theory in isolation is not very important. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (who was Oakeshott’s colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of the rules of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University, at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of the changing attitudes toward using formal probability theory as the basis for making empirical inferences from data. The assumptions that probability theory makes about the nature of the underlying data were long viewed as too extreme for probability theory to serve as an acceptable basis for empirical inference, but those early negative attitudes were gradually overcome (or disregarded). Even so, as late as the 1960s, when econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data by way of regression analysis. Only in the 1970s was there a rapid sea change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of empirical assessment of economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the economics profession’s attitude about how inferences should be drawn from data about empirical relationships corresponds strikingly to Oakeshott’s discussion, and it is depressing in its implications for the decline of expert judgment among economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.
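
To see why treating a significance threshold as the arbiter of evidence is such a blunt instrument, consider a minimal illustrative simulation, my own hypothetical example and not anything drawn from Biddle’s paper. With a small sample, a genuinely positive relationship will often yield a sizable slope estimate that nonetheless fails the conventional 5% test, and under the post-1970s convention that estimate would simply be discounted as evidence.

```python
# Minimal illustrative simulation (hypothetical data, not from Biddle's paper):
# a genuinely positive relationship estimated from a small sample will often
# produce a sizable slope estimate that nonetheless fails the conventional
# 5% significance test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 12                     # deliberately small sample
beta_true = 0.8            # the "population" relationship really is positive
x = rng.normal(size=n)
y = beta_true * x + rng.normal(scale=2.0, size=n)   # noisy observations

X = sm.add_constant(x)     # add an intercept
fit = sm.OLS(y, X).fit()   # ordinary least squares

print(f"estimated slope: {fit.params[1]:.2f}")
print(f"p-value:         {fit.pvalues[1]:.2f}")
# Under the post-1970s convention, a p-value above 0.05 means the estimate
# would simply not be accepted as evidence of a positive relationship,
# regardless of what else is known about the data.
```

Nothing in the mechanics of the test says whether discounting the estimate is the right call; that depends on knowledge of the circumstances under which the data were generated, which is exactly the sort of expert judgment Biddle describes being crowded out of the training process.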

But my point is not to condemn the use of rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data, but to challenge the methodological prohibition of the kinds of expert judgments that many statistically knowledgeable economists, including Nobel Prize winners such as Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research that does not obey the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense of scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that have been almost totally displaced. It is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake of which even Popper was never guilty.

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only make the further comment that this is yet another example of the danger of elevating technique over practice and substance.
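
For readers unfamiliar with what such an intertemporal-optimization exercise involves, here is a deliberately minimal sketch, stripped of all the apparatus of an actual DSGE model and using purely illustrative parameter values: a single representative agent chooses consumption in two periods to maximize discounted log utility subject to a lifetime budget constraint.

```python
# A deliberately minimal sketch (illustrative parameters, not any particular
# model from the literature): a single representative agent chooses first-period
# consumption c1 to maximize log(c1) + beta*log(c2), where savings earn interest
# r and second-period consumption is c2 = (y1 - c1)*(1 + r) + y2.
import numpy as np
from scipy.optimize import minimize_scalar

beta, r = 0.96, 0.04        # discount factor and interest rate (assumed values)
y1, y2 = 1.0, 1.0           # endowments in each period (assumed values)

def negative_lifetime_utility(c1):
    c2 = (y1 - c1) * (1 + r) + y2
    return -(np.log(c1) + beta * np.log(c2))

wealth = y1 + y2 / (1 + r)  # present value of lifetime resources
res = minimize_scalar(negative_lifetime_utility,
                      bounds=(1e-6, wealth - 1e-6), method="bounded")
c1_opt = res.x
c2_opt = (y1 - c1_opt) * (1 + r) + y2

print(f"c1 = {c1_opt:.3f}, c2 = {c2_opt:.3f}")
# At the optimum the Euler equation 1/c1 = beta*(1+r)*(1/c2) holds (up to
# numerical tolerance), the kind of first-order condition around which
# "micro-founded" macro models are built.
```

The microfoundations program requires that aggregate relationships be derived from exercises of this kind; the objection is that solving such a problem for a single fictional agent bypasses precisely the interactions among many heterogeneous agents that macroeconomics is supposed to explain.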

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, that it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were unwittingly hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts rather than empirically verifiable objects, were themselves — on the Positivists’ own criterion of meaning — meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, which was the judgment on the basis of which Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.


