Archive for the 'Thomas Sargent' Category

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), were focused on technical identification issues and on the dependence of estimated regression coefficients of econometric models on agents’ expectations conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for Keynes’s abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and still does not, describe in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not too realistic stories about how markets might clear, of which the two best-known stories are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good correctly anticipated by market participants corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so it is not an alternative to, but must be subsumed under, the Walrasian general equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market-clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his/her/their plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent try to normalize their insistence that equilibrium is an obligatory postulate that must be accepted by economists by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if the routine adoption of any theoretical or methodological assumption becomes ipso facto justified once adopted routinely. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that it is empirically empty to claim that any collection of economic time series is either in equilibrium or out of equilibrium. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium supports the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras’s Law is satisfied, aggregating the excess demands of all agents would not necessarily cause the aggregate excess-demand functions to behave in such a way as to guarantee a unique or a stable equilibrium. But if we have no good argument to explain why a unique or at least a stable neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?

As Franklin Fisher (1989) showed, inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), comparative statics exercises, is conditional on the assumption that there is at least one stable general equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Kepler and later Galileo conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system and proposing that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric predictions of the actual time series outperformed the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative, lacking scientific or philosophical justification, of formal intertemporal optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from a more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with many of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time series variables, current macroeconomic disputes are real disputes about what the world is like and whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or of coordination failures that cause economic agents to be surprised and disappointed and rendered unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It’s long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, having explicitly denied and argued at length that his results did not depend on the assumption of sticky wages, Keynes certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence of micro (utility and cost theory, econ 101a), (general equilibrium theory, 101b), and (macro theory, 102) as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement and highlighted its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory,” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment imagining the distance travelled by a projectile launched from a fixed position being progressively lengthened until the projectile’s trajectory transitioned into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that’s a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could be an analytical vehicle that could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful, so that the macroeconomic debate was often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own preview “Keynes and the Keynesians: A Suggested Interpretation” of his forthcoming book, and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in the fall of 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, stable and unstable dynamic systems and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and to business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index to Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different direction, following the path blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing wage, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing market wage by raising the price level to reduce the real wage down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others, including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment could occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Attention to the implications of the lack of an actual coordinating mechanism simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption) by neoclassical economic theory was, in Axel’s view, the great contribution of Keynes. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers, b) assuming that all agents therefore form identical rational expectations of all future prices using the same common knowledge, so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real productivity shocks, dispensing with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis but with little claim to any actual analytical insight.
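The difficulty can be seen in a toy calculation (my own sketch, not the actual Lucasian signal-extraction model; the supply-response parameter `b` is hypothetical). If output responds only to the current period's price surprise, and the ambiguity is resolved after one period, then output inherits the white-noise character of the surprises and shows essentially none of the persistence observed in aggregate time series:

```python
import random

# Sketch: output responds only to the period-t price surprise; the
# relative-vs-absolute ambiguity is resolved after one period, so no
# serial correlation carries over into output.
random.seed(0)
T = 5000
b = 1.5                                   # hypothetical supply response to a surprise
surprises = [random.gauss(0, 1) for _ in range(T)]
output = [b * e for e in surprises]       # y_t depends only on the period-t surprise

def autocorr(x, lag=1):
    # Sample autocorrelation at the given lag.
    mean = sum(x) / len(x)
    num = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(len(x) - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

print(round(autocorr(output), 3))  # close to 0: no persistence to match the data
```

With only a one-period informational lag, the lag-1 autocorrelation of output is statistically indistinguishable from zero, which is the empirical failure that pushed the research program toward other propagation stories.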

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson, whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended — that met one evening a week for three hours. Each session — and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. Earl explained the method as resting on the assumption that markets for current delivery clear, but that those market-clearing prices differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

Three Propagation Mechanisms in Lucas and Sargent with a Response from Brad DeLong

UPDATE (4/3/2022): Reupping this post with the response to my query sent by Brad DeLong.

I’m writing this post in hopes of eliciting some guidance from readers about the three propagation mechanisms to which Robert Lucas and Thomas Sargent refer in their famous 1978 article, “After Keynesian Macroeconomics.” The three propagation mechanisms were mentioned to parry criticisms of the rational-expectations principle underlying the New Classical macroeconomics that Lucas and Sargent were then developing as an alternative to Keynesian macroeconomics. I am wondering how subsequent research has dealt with these propagation mechanisms and how they are now treated in current macro-theory. Here is the relevant passage from Lucas and Sargent:

A second line of criticism stems from the correct observation that if agents’ expectations are rational and if their information sets include lagged values of the variable being forecast, then agents’ forecast errors must be a serially uncorrelated random process. That is, on average there must be no detectable relationships between a period’s forecast error and any previous period’s. This feature has led several critics to conclude that equilibrium models cannot account for more than an insignificant part of the highly serially correlated movements we observe in real output, employment, unemployment, and other series. Tobin (1977, p. 461) has put the argument succinctly:

One currently popular explanation of variations in employment is temporary confusion of relative and absolute prices. Employers and workers are fooled into too many jobs by unexpected inflation, but only until they learn it affects other prices, not just the prices of what they sell. The reverse happens temporarily when inflation falls short of expectation. This model can scarcely explain more than transient disequilibrium in labor markets.

So how can the faithful explain the slow cycles of unemployment we actually observe? Only by arguing that the natural rate itself fluctuates, that variations in unemployment rates are substantially changes in voluntary, frictional, or structural unemployment rather than in involuntary joblessness due to generally deficient demand.

The critics typically conclude that the theory only attributes a very minor role to aggregate demand fluctuations and necessarily depends on disturbances to aggregate supply to account for most of the fluctuations in real output over the business cycle. “In other words,” as Modigliani (1977) has said, “what happened to the United States in the 1930’s was a severe attack of contagious laziness.” This criticism is fallacious because it fails to distinguish properly between sources of impulses and propagation mechanisms, a distinction stressed by Ragnar Frisch in a classic 1933 paper that provided many of the technical foundations for Keynesian macroeconometric models. Even though the new classical theory implies that the forecast errors which are the aggregate demand impulses are serially uncorrelated, it is certainly logically possible that propagation mechanisms are at work that convert these impulses into serially correlated movements in real variables like output and employment. Indeed, detailed theoretical work has already shown that two concrete propagation mechanisms do precisely that.

One mechanism stems from the presence of costs to firms of adjusting their stocks of capital and labor rapidly. The presence of these costs is known to make it optimal for firms to spread out over time their response to the relative price signals they receive. That is, such a mechanism causes a firm to convert the serially uncorrelated forecast errors in predicting relative prices into serially correlated movements in factor demands and output.

A second propagation mechanism is already present in the most classical of economic growth models. Households’ optimal accumulation plans for claims on physical capital and other assets convert serially uncorrelated impulses into serially correlated demands for the accumulation of real assets. This happens because agents typically want to divide any unexpected changes in income partly between consuming and accumulating assets. Thus, the demand for assets next period depends on initial stocks and on unexpected changes in the prices or income facing agents. This dependence makes serially uncorrelated surprises lead to serially correlated movements in demands for physical assets. Lucas (1975) showed how this propagation mechanism readily accepts errors in forecasting aggregate demand as an impulse source.

A third likely propagation mechanism has been identified by recent work in search theory. (See, for example, McCall 1965, Mortensen 1970, and Lucas and Prescott 1974.) Search theory tries to explain why workers who for some reason are without jobs find it rational not necessarily to take the first job offer that comes along but instead to remain unemployed for awhile until a better offer materializes. Similarly, the theory explains why a firm may find it optimal to wait until a more suitable job applicant appears so that vacancies persist for some time. Mainly for technical reasons, consistent theoretical models that permit this propagation mechanism to accept errors in forecasting aggregate demand as an impulse have not yet been worked out, but the mechanism seems likely eventually to play an important role in a successful model of the time series behavior of the unemployment rate. In models where agents have imperfect information, either of the first two mechanisms and probably the third can make serially correlated movements in real variables stem from the introduction of a serially uncorrelated sequence of forecasting errors. Thus theoretical and econometric models have been constructed in which in principle the serially uncorrelated process of forecasting errors can account for any proportion between zero and one of the steady state variance of real output or employment. The argument that such models must necessarily attribute most of the variance in real output and employment to variations in aggregate supply is simply wrong logically.
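The logic of the first two propagation mechanisms can be illustrated with a minimal partial-adjustment sketch (my own construction, not a model from the paper; the parameter `rho`, standing in for the speed at which agents spread their response over time, is hypothetical). Serially uncorrelated forecast-error impulses fed through a partial-adjustment rule produce serially correlated output, exactly the logical possibility Lucas and Sargent invoke:

```python
import random

# Sketch: white-noise impulses propagated through a partial-adjustment
# rule, y_t = rho * y_{t-1} + eps_t, as with costs of adjusting stocks
# of capital and labor. The impulses are serially uncorrelated, but the
# resulting output series is serially correlated.
random.seed(1)
T = 5000
rho = 0.8                                  # hypothetical adjustment-speed parameter
impulses = [random.gauss(0, 1) for _ in range(T)]

output = [0.0]
for e in impulses[1:]:
    output.append(rho * output[-1] + e)    # spread the response over many periods

def autocorr(x, lag=1):
    # Sample autocorrelation at the given lag.
    mean = sum(x) / len(x)
    num = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(len(x) - lag))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

print(round(autocorr(impulses), 2))  # ~0: the impulses are white noise
print(round(autocorr(output), 2))    # ~0.8: output is serially correlated
```

The sketch shows only that serial correlation in output is compatible with serially uncorrelated impulses; it says nothing about whether the shocks are thereby amplified, which is the question raised below.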

My problem with the Lucas-Sargent argument is that even if the deviations from a long-run equilibrium path are serially correlated, shouldn’t those deviations be diminishing over time after the initial disturbance? Can these propagation mechanisms account for amplification of the initial disturbance before the adjustment toward the equilibrium path begins? I would gratefully welcome any responses.

David Glasner has a question about the “rational expectations” business-cycle theories developed in the 1970s:

David Glasner, Three Propagation Mechanisms in Lucas & Sargent: ‘I’m… hop[ing for]… some guidance… about… propagation mechanisms… [in] Robert Lucas and Thomas Sargent[‘s]… “After Keynesian Macroeconomics.”… 

The critics typically conclude that the theory only attributes a very minor role to aggregate demand fluctuations and necessarily depends on disturbances to aggregate supply…. [But] even though the new classical theory implies that the forecast errors which are the aggregate demand impulses are serially uncorrelated, it is certainly logically possible that propagation mechanisms are at work that convert these impulses into serially correlated movements in real variables like output and employment… the presence of costs to firms of adjusting their stocks of capital and labor rapidly…. accumulation plans for claims on physical capital and other assets convert serially uncorrelated impulses into serially correlated demands for the accumulation of real assets… workers who for some reason are without jobs find it rational not necessarily to take the first job offer that comes along but instead to remain unemployed for awhile until a better offer materializes…. In principle the serially uncorrelated process of forecasting errors can account for any proportion between zero and one of the [serially correlated] steady state variance of real output or employment. The argument that such models must necessarily attribute most of the variance in real output and employment to variations in aggregate supply is simply wrong logically…

My problem with the Lucas-Sargent argument is that even if the deviations from a long-run equilibrium path are serially correlated, shouldn’t those deviations be diminishing over time after the initial disturbance? Can these propagation mechanisms account for amplification of the initial disturbance before the adjustment toward the equilibrium path begins? I would gratefully welcome any responses…

In some ways this is of only history-of-thought interest. For Lucas and Prescott, at least, had within five years of the writing of “After Keynesian Macroeconomics” decided that the critics were right: their models, in which mistaken decisions driven by serially uncorrelated forecast errors generate fluctuations, could not account for the bulk of the serially correlated business-cycle variance of real output and employment, and they needed to shift to studying real-business-cycle theory instead of price-misperceptions theory. The first problem was that time-series methods generated shocks that came at the wrong times to explain recessions. The second problem was that the propagation mechanisms did not amplify but rather damped the shock: at best they produced some kind of partial-adjustment process that extended the impact of a shock on real variables to N periods and diminished its impact in any single period to 1/N. There was no… what is the word?… multiplier in the system.
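DeLong’s damping point can be put in a few lines (my own sketch, assuming the same simple geometric partial-adjustment response discussed above; the parameter `rho` is hypothetical). The response to a one-time unit shock peaks at impact and decays monotonically, so the mechanism spreads a shock out but never amplifies it into a hump-shaped cycle:

```python
# Sketch: impulse response of a pure partial-adjustment mechanism,
# y_t = rho * y_{t-1}, after a unit shock at t = 0. The response to the
# shock is rho**t: largest on impact, then monotonically decaying.
rho = 0.8                             # hypothetical adjustment parameter
irf = [rho ** t for t in range(12)]   # impulse response over 12 periods

peak = max(irf)
print(irf.index(peak))                # 0: the peak response is at impact
print(round(sum(irf), 2))             # 4.66: finite cumulative effect, no multiplier
```

The cumulative effect converges to 1/(1 - rho), a fixed finite number: the shock is smeared across periods and damped in each one, which is precisely the absence of the amplification a multiplier would provide.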

It was stunning to watch in real time in the early 1980s. As Paul Volcker hit the economy on the head with the monetary-stringency brick, repeatedly, quarter after quarter; as his serially correlated and hence easily anticipated policy moves had large and highly serially correlated effects on output; Robert Lucas and company simply… pretended it was not happening: pretended that monetary policy was not having major effects on output and employment in the first half of the 1980s, even though the monetary policies that were having such profound real impacts had no plausible interpretation as “surprises” leading to “misperceptions”. Meanwhile, over in the other corner, Robert Barro was claiming that he saw no break in the standard pattern of federal deficits from the Reagan administration’s combination of tax cuts plus defense buildup.

Those of us who were graduate students at the time watched this, and drew conclusions about the likelihood that Lucas, Prescott, and company had good enough judgment and close enough contact with reality that their proposed “real business cycle” research program would be a productive one—conclusions that, I think, time has proved fully correct.

Behind all this, of course, was this issue: the “microfoundations” of the Lucas “island economy” model were totally stupid: people are supposed to “misperceive” relative prices because they know the nominal prices at which they sell but do not know the nominal prices at which they buy, hence people confuse a monetary shock-generated rise in the nominal price level with an increase in the real price of what they produce, and hence work harder and longer and produce more? (I forget who it was who said at the time that the model seemed to require a family in which the husband worked and the wife went to the grocery store and the husband never listened to anything the wife said.) These so-called “microfoundations” could only be rationally understood as some kind of metaphor. But what kind of metaphor? And why should it have any special status, and claim on our attention?

Paul Krugman’s judgment on the consequences of this intellectual turn is even harsher than mine:

What made the Dark Ages dark was the fact that so much knowledge had been lost, that so much known to the Greeks and Romans had been forgotten by the barbarian kingdoms that followed. And that’s what seems to have happened to macroeconomics in much of the economics profession. The knowledge that S=I doesn’t imply the Treasury view—the general understanding that macroeconomics is more than supply and demand plus the quantity equation — somehow got lost in much of the profession. I’m tempted to go on and say something about being overrun by barbarians in the grip of an obscurantist faith…

I would merely say that it has left us, over what is now two generations, with a turn to DSGE models — Dynamic Stochastic General Equilibrium — that must satisfy a set of formal rhetorical requirements that really do not help us fit the data, and that gave many, many people an excuse not to read, and hence a license to remain ignorant of, James Tobin.

Brad


Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing in the American Economic Review a paper, “Mathiness in the Theory of Economic Growth,” attacking two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that has led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price stickiness assumptions; I don’t know why Romer identifies market-clearing as a Lucasian dogma — may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which needed to be put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized, the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium concern the existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow had written a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest, and Prices, which provided perhaps the definitive statement of the neoclassical synthesis that Samuelson espoused. In one lecture (“My Keynesian Education,” presented to the History of Economics Society in 2003), Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In his 1959 paper (“Toward a Theory of Price Adjustment”), which I mentioned above, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: if all agents in a general equilibrium are assumed to be price takers, how is a new equilibrium price ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers but price searchers, possessing some measure of market power to set prices in the transition between the old and the new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.
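The textbook patch for the paradox is, of course, Walras’s tâtonnement: a fictitious auctioneer, standing outside the model, calls out prices and adjusts them in proportion to excess demand, with no trading allowed until the market clears. A minimal sketch (two Cobb-Douglas agents with invented parameters; good 2 serves as numeraire) shows the mechanism, and also why it dodges rather than answers Arrow’s question, since the auctioneer is precisely the price setter that the theory of price-taking agents lacks:

```python
# Tatonnement sketch: a fictitious auctioneer (not any actual trader)
# adjusts the price of good 1 in proportion to excess demand until the
# market clears. Two Cobb-Douglas agents; good 2 is the numeraire (p2 = 1).
# All parameters here are illustrative assumptions, not from the post.

def excess_demand(p1):
    # Agent A: utility weight 0.3 on good 1, endowment (1, 0)
    # Agent B: utility weight 0.6 on good 1, endowment (0, 1)
    demand_a = 0.3 * (p1 * 1 + 1 * 0) / p1
    demand_b = 0.6 * (p1 * 0 + 1 * 1) / p1
    return demand_a + demand_b - 1.0  # total demand minus total endowment

def tatonnement(p1=0.5, speed=0.2, tol=1e-10, max_iter=10_000):
    for _ in range(max_iter):
        z = excess_demand(p1)
        if abs(z) < tol:
            return p1
        p1 += speed * z  # raise price where demand exceeds supply
    raise RuntimeError("did not converge")

p_star = tatonnement()
print(round(p_star, 6))  # converges to 6/7, i.e. 0.857143
```

By Walras’s law, once the market for good 1 clears, the numeraire market clears as well. Note that convergence here is a property of the adjustment rule the auctioneer is assumed to follow, not of any agent’s purposeful behavior, which is exactly the gap Arrow’s paper identified.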

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Memo to Tom Sargent: Economics Is More than Just Common Sense

Paul Krugman was really not very happy with his fellow Nobel Laureate Tom Sargent this week, posting two consecutive rebuttals (here and here) to a 2006 commencement speech (five years before getting the prize) that Sargent gave at UC Berkeley.

Let’s look at what Sargent had to say. All of it.

I remember how happy I felt when I graduated from Berkeley many years ago. But I thought the graduation speeches were long. I will economize on words.

Economics is organized common sense. Here is a short list of valuable lessons that our beautiful subject teaches.

1. Many things that are desirable are not feasible.

2. Individuals and communities face trade-offs.

3. Other people have more information about their abilities, their efforts, and their preferences than you do.

4. Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended.

5. There are tradeoffs between equality and efficiency.

6. In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well-meaning outsiders to change things for better or worse.

7. In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation.

8. Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made.

9. It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore).

10. When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation.

11. Most people want other people to pay for public goods and government transfers (especially transfers to themselves).

12. Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates.

I was mainly struck by two things about this speech:

First, the brevity of the speech is attributed to empathy for the audience’s limited attention span, but it is hard to avoid the suspicion that Sargent was responding to an incentive to shirk the challenging responsibility of a commencement speaker to say something meaningful and memorable, instead patching together a list of truisms and platitudes interspersed with a few potentially problematic assertions, without distinguishing between the platitudinous and the problematic.

Second, the complacent tone, audible especially in the sentence: “Economics is organized common sense.” At the same time Sargent says that economics is a beautiful subject. It would be interesting to find out what it is about the organization of common sense that seems beautiful to Sargent, but let us not probe too deeply into Sargent’s thought processes. Nearly three years ago, just after starting this blog, I observed that common sense is not enough to do economics right. Things are not always what they seem to be. The earth is not really flat and the sun doesn’t really revolve around the earth. Our common sense has to be taught how to perceive reality, which means that we have to think more carefully about the world than just accepting what common sense tells us must be so.

That’s why reading Sargent, I couldn’t help but think of what John Stuart Mill, who very likely had an IQ even higher than Tom Sargent, said 165 years ago in his great treatise Principles of Political Economy.

Happily, there is nothing in the laws of Value which remains for the present or any future writer to clear up; the theory of the subject is complete.

So, in the spirit of not just taking things at face value, let me offer some brief comments on Sargent’s 12 maxims.

1. Many things that are desirable are not feasible. Comment: And the feasibility of many of those things that are desirable is uncertain. In fact, mention of uncertainty — a rather important feature of reality, or so it would seem to my common sense — is conspicuous by its absence.

2. Individuals and communities face tradeoffs. Comment: Tradeoffs don’t necessarily exist when individuals or communities are not optimizing. And even if every individual is optimizing, the community as a whole may not be.

3. Other people have more information about their abilities, their efforts, and their preferences than you do. Nitpicky Comment: Very badly written. Evidently he means that other people have more information about themselves than you have about them, but it could be interpreted to mean that they have more information about themselves than you have about yourself, some people being more self-aware than others.

4. Everyone responds to incentives, including people you want to help. That is why social safety nets don’t always end up working as intended. Comment: None, but see 11 below.

5. There are tradeoffs between equality and efficiency. Comment: This is so vague and so simplistic as to be useless.

6. In an equilibrium of a game or an economy, people are satisfied with their choices. That is why it is difficult for well-meaning outsiders to change things for better or worse. Comment: What kind of equilibrium are we talking about? Not every equilibrium is a social optimum. How do we know that equilibrium is an appropriate way of analyzing a social state? In an equilibrium, can there be surprises? Regrets? If we observe people being surprised and regretful, does that mean they are deluded or misinterpreting their feelings? What is the common-sense understanding that one should attach to such frequently observed states of mind?

7. In the future, you too will respond to incentives. That is why there are some promises that you’d like to make but can’t. No one will believe those promises because they know that later it will not be in your interest to deliver. The lesson here is this: before you make a promise, think about whether you will want to keep it if and when your circumstances change. This is how you earn a reputation. Comment: None.

8. Governments and voters respond to incentives too. That is why governments sometimes default on loans and other promises that they have made. Comment: None.

9. It is feasible for one generation to shift costs to subsequent ones. That is what national government debts and the U.S. social security system do (but not the social security system of Singapore). Comment: The circumstances under which generational shifts occur and the magnitude of those shifts are not so clear. Also, if the growth of knowledge and productivity, which are not necessarily tied to the amount of current saving, is likely to make future generations substantially better off than the current generation, it is not obvious that imposing a debt burden on future generations is an unjust choice.

10. When a government spends, its citizens eventually pay, either today or tomorrow, either through explicit taxes or implicit ones like inflation. Comment: Depends on what governments spend on.

11. Most people want other people to pay for public goods and government transfers (especially transfers to themselves). Comment: Why most? Who does want to pay for public goods and who doesn’t want to receive transfers? I thought that everyone responds to incentives. See 4 above.

12. Because market prices aggregate traders’ information, it is difficult to forecast stock prices and interest rates and exchange rates. Comment: None.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and on the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
