Archive for the 'tatonnement' Category

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), were focused on technical identification issues and on the dependence of estimated regression coefficients of econometric models on agents’ expectations conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for the Keynesian abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and has not, described in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not too realistic stories about how markets might clear, of which the two best-known stories are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good correctly anticipated by market participants corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so it is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.
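For concreteness, the tâtonnement story is conventionally formalized (this is the standard textbook rendering, not anything found in Lucas and Sargent) as an imaginary auctioneer adjusting each announced price in proportion to the excess demand for the corresponding good, with no trades executed until every excess demand has been driven to zero:

\[ \dot{p}_i = \lambda_i \, z_i(p_1, \ldots, p_n), \qquad \lambda_i > 0, \quad i = 1, \ldots, n, \]

where z_i denotes the aggregate excess demand for good i. The rule describes, at best, how announced prices might be revised when they are wrong; it says nothing about how actual traders, with no auctioneer to assist them, could ever find the price vector at which all the excess demands vanish.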

Thus, in invoking the neoclassical postulates of market-clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his/her/their plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent try to normalize their insistence that equilibrium is an obligatory postulate that must be accepted by economists by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if the routine adoption of any theoretical or methodological assumption becomes ipso facto justified once adopted routinely. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that the hypothesis that any collection of economic time series is either in equilibrium or out of equilibrium is empirically empty. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium is an argument supporting the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras’s Law is satisfied, aggregating the excess demands of all agents would not necessarily cause the aggregate excess-demand functions to behave in such a way that a unique, or even a stable, equilibrium is guaranteed. But if we have no good argument to explain why a unique or at least a stable neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?
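To state the Sonnenschein-Mantel-Debreu result in its textbook form (my summary, not a quotation from any of the three papers): if z(p) denotes aggregate excess demand, the only properties that survive aggregation are

\[ z \ \text{continuous}, \qquad z(\lambda p) = z(p) \ \ \text{for all } \lambda > 0, \qquad p \cdot z(p) = 0 \ \ \text{(Walras's Law)}, \]

and any function satisfying just these three conditions (on prices bounded away from zero) can be generated as the excess demand of some otherwise well-behaved economy, so neither the uniqueness nor the stability of the resulting equilibrium can be inferred from the standard assumptions about individual agents.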

As Franklin Fisher (1989) showed, the inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), comparative-statics exercises, is conditional on the assumption that there is at least one stable general-equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Kepler and later Galileo conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system and proposing that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric predictions of the observed positions of the planets continued to outperform the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative lacking scientific or philosophical justification, of formal intertemporal-optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from the more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with some of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time-series variables, current macroeconomic disputes are real disputes about what the world is like: about whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or the result of coordination failures that cause economic agents to be surprised and disappointed and rendered unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It’s long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, having explicitly denied that his results depended on the assumption of sticky wages, and having argued the point at length, Keynes certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence of micro (utility and cost theory, econ 101a), (general equilibrium theory, 101b), and (macro theory, 102) as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement and highlighted its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could be an analytical vehicle that could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful, which was why macroeconomics was often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
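In the standard textbook rendering (a generic sketch, not Axel’s own notation), the two curves come from

\[ \text{IS:} \ \ Y = C(Y) + I(r) + G, \qquad \text{LM:} \ \ M/P = L(Y, r), \]

with investment and money demand both decreasing in the interest rate. The extreme Keynesian-cross case corresponds to a demand for money that is nearly perfectly elastic with respect to the interest rate (an effectively flat LM curve), rendering monetary policy impotent, while the Monetarist (classical) case corresponds to a nearly interest-inelastic demand for money (an effectively vertical LM curve), rendering fiscal policy impotent.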

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics, like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own preview of his forthcoming book, “Keynes and the Keynesians: A Suggested Interpretation,” and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in the fall of 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, and stable and unstable dynamic systems, and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights into informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index to Axel’s excellent collection of essays written from the late 1960s to the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different direction, following the path blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing wage, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing market wage by raising the price level to reduce the real wage down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others, including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment would occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Attention to the implications of the lack of an actual coordinating mechanism simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption) by neoclassical economic theory was, in Axel’s view, the great contribution of Keynes. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along, dispensing with the auctioneer and eliminating tâtonnement while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers, b) assuming that all agents therefore form identical rational expectations of all future prices using the same common knowledge, so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.
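In the now-familiar notation (my gloss on the stratagem, not Lucas’s own formulation), step (b) amounts to endowing every agent with the model’s own conditional forecast,

\[ p^{e}_{t+1} = E\left[\, p_{t+1} \mid \Omega_t \,\right], \]

where \Omega_t is the common information set, so that forecast errors are unsystematic and the price vector that agents anticipate is the very equilibrium price vector that the auctioneer had previously been needed to discover.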

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, an ambiguity resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real-productivity shocks, dispensing with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis, but with little claim to any actual analytical insight.
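To make the first point concrete, the informational ambiguity of those early Lucasian (“island”) models reduces to a standard signal-extraction problem (again a textbook rendering rather than anything quoted from Lucas): a producer observing a change p in his local price, the sum of an economy-wide component v and a relative component z, attributes to the relative component only the fraction

\[ E[z \mid p] = \frac{\sigma_z^2}{\sigma_z^2 + \sigma_v^2}\, p, \]

so monetary shocks affect output only insofar as they are mistaken for relative-price shocks, a confusion that, being dispelled after a single period, cannot by itself generate the persistence observed in the aggregate data.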

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek, and Earl’s explanation of the method was based on the assumption that markets for current delivery clear, but that those market-clearing prices differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

The Rises and Falls of Keynesianism and Monetarism

The following is extracted from a paper on the history of macroeconomics that I’m now writing. I don’t know yet where or when it will be published and there may or may not be further installments, but I would be interested in any comments or suggestions that readers might have. Regular readers, if there are any, will probably recognize some familiar themes that I’ve been writing about in a number of my posts over the past several months. So despite the diminished frequency of my posting, I haven’t been entirely idle.

Recognizing the cognitive dissonance between the vision of the optimal equilibrium of a competitive market economy described by Marshallian economic theory and the massive unemployment of the Great Depression, Keynes offered an alternative, and, in his view, more general, theory, the optimal neoclassical equilibrium being a special case.[1] The explanatory barrier that Keynes struggled, not quite successfully, to overcome in the dire circumstances of the 1930s, was why market-price adjustments do not have the equilibrating tendencies attributed to them by Marshallian theory. The power of Keynes’s analysis, enhanced by his rhetorical gifts, enabled him to persuade much of the economics profession, especially many of the most gifted younger economists at the time, that he was right. But his argument, failing to expose the key weakness in the neoclassical orthodoxy, was incomplete.

The full title of Keynes’s book, The General Theory of Employment, Interest and Money, identifies the key elements of his revision of neoclassical theory. First, contrary to a simplistic application of Marshallian theory, the mass unemployment of the Great Depression would not be substantially reduced by cutting wages to “clear” the labor market. The reason, according to Keynes, is that the levels of output and unemployment depend not on money wages, but on planned total spending (aggregate demand). Mass unemployment is the result of too little spending, not excessive wages. Reducing wages would simply cause a corresponding decline in total spending, without increasing output or employment.

If wage cuts do not increase output and employment, the ensuing high unemployment, Keynes argued, is involuntary, not the outcome of optimizing choices made by workers and employers. Ever since, the notion that unemployment can be involuntary has remained a contested issue between Keynesians and neoclassicists, a contest requiring resolution in favor of one or the other theory or some reconciliation of the two.

Besides rejecting the neoclassical theory of employment, Keynes also famously disputed the neoclassical theory of interest by arguing that the rate of interest is not, as in the neoclassical theory, a reward for saving, but a reward for sacrificing liquidity. In Keynes’s view, rather than equilibrate savings and investment, interest equilibrates the demand to hold money with the amount issued by the monetary authority. Under the neoclassical theory, it is the price level that adjusts to equilibrate the demand for money with the quantity issued.

Had Keynes been more attuned to the Walrasian paradigm, he might have recast his argument that cutting wages would not eliminate unemployment by noting the inapplicability of a Marshallian supply-demand analysis of the labor market (accounting for over 50 percent of national income), because wage cuts would shift demand and supply curves in almost every other input and output market, grossly violating the ceteris-paribus assumption underlying the Marshallian supply-demand paradigm. When every change in the wage shifts supply and demand curves in all markets for goods and services, which in turn causes the labor-demand and labor-supply curves to shift, a supply-demand analysis of aggregate unemployment becomes a futile exercise.

Keynes’s work had two immediate effects on economics and economists. First, it immediately opened up a new field of research – macroeconomics – based on his theory that total output and employment are determined by aggregate demand. Representing only one element of Keynes’s argument, the simplified Keynesian model, on which macroeconomic theory was founded, seemed disconnected from either the Marshallian or Walrasian versions of neoclassical theory.

Second, the apparent disconnect between the simple Keynesian macro-model and neoclassical theory provoked an ongoing debate about the extent to which Keynesian theory could be deduced, or even reconciled, with the premises of neoclassical theory. Initial steps toward a reconciliation were provided when a model incorporating the quantity of money and the interest rate into the Keynesian analysis was introduced, soon becoming the canonical macroeconomic model of undergraduate and graduate textbooks.

Critics of Keynesian theory, usually those opposed to its support for deficit spending as a tool of aggregate demand management, its supposed inflationary bias, and its encouragement or toleration of government intervention in the free-market economy, tried to debunk Keynesianism by pointing out its inconsistencies with the neoclassical doctrine of a self-regulating market economy. But proponents of Keynesian precepts were also trying to reconcile Keynesian analysis with neoclassical theory. Future Nobel Prize winners like J. R. Hicks, J. E. Meade, Paul Samuelson, Franco Modigliani, James Tobin, and Lawrence Klein all derived various Keynesian propositions from neoclassical assumptions, usually by resorting to the un-Keynesian assumption of rigid or sticky prices and wages.

What both Keynesian and neoclassical economists failed to see is that, notwithstanding the optimality of an economy with equilibrium market prices, neoclassical theory, in either its Walrasian or its Marshallian version, cannot explain either how that set of equilibrium prices is, or can be, found, or how it results automatically from the routine operation of free markets.

The assumption made implicitly by both Keynesians and neoclassicals was that, in an ideal perfectly competitive free-market economy, prices would adjust, if not instantaneously, at least eventually, to their equilibrium, market-clearing, levels so that the economy would achieve an equilibrium state. Not all Keynesians, of course, agreed that a perfectly competitive economy would reach that outcome, even in the long-run. But, according to neoclassical theory, equilibrium is the state toward which a competitive economy is drawn.

Keynesian policy could therefore be rationalized as an instrument for reversing departures from equilibrium and ensuring that such departures are relatively small and transitory. Notwithstanding Keynes’s explicit argument that wage cuts cannot eliminate involuntary unemployment, the sticky-prices-and-wages story was too convenient not to be adopted as a rationalization of Keynesian policy while also reconciling that policy with the neoclassical orthodoxy associated with the postwar ascendancy of the Walrasian paradigm.

The Walrasian ascendancy in neoclassical theory was the culmination of a silent revolution beginning in the late 1920s when the work of Walras and his successors was taken up by a younger generation of mathematically trained economists. The revolution proceeded along many fronts, of which the most important was proving the existence of a solution of the system of equations describing a general equilibrium for a competitive economy — a proof that Walras himself had not provided. The sophisticated mathematics used to describe the relevant general-equilibrium models and derive mathematically rigorous proofs encouraged the process of rapid development, adoption and application of mathematical techniques by subsequent generations of economists.

Despite the early success of the Walrasian paradigm, Kenneth Arrow, perhaps the most important Walrasian theorist of the second half of the twentieth century, drew attention to the explanatory gap within the paradigm: how the adjustment of disequilibrium prices is possible in a model of perfect competition in which every transactor takes market price as given. The Walrasian theory shows that a competitive equilibrium ensuring the consistency of agents’ plans to buy and sell results from an equilibrium set of prices for all goods and services. But the theory is silent about how those equilibrium prices are found and communicated to the agents of the model, the Walrasian tâtonnement process being an empirically empty heuristic artifact.

In fact, the explanatory gap identified by Arrow was even wider than he had suggested or realized, for another aspect of the Walrasian revolution of the late 1920s and 1930s was the extension of the equilibrium concept from a single-period equilibrium to an intertemporal equilibrium. Although earlier works by Irving Fisher and Frank Knight laid a foundation for this extension, the explicit articulation of intertemporal-equilibrium analysis was the nearly simultaneous contribution of three young economists, two Swedes (Myrdal and Lindahl) and an Austrian (Hayek) whose significance, despite being partially incorporated into the canonical Arrow-Debreu-McKenzie version of the Walrasian model, remains insufficiently recognized.

These three economists transformed the concept of equilibrium from an unchanging static economic system at rest to a dynamic system changing from period to period. While Walras and Marshall had conceived of a single-period equilibrium with no tendency to change barring an exogenous change in underlying conditions, Myrdal, Lindahl and Hayek conceived of an equilibrium unfolding through time, defined by the mutual consistency of the optimal plans of disparate agents to buy and sell in the present and in the future.

In formulating optimal plans that extend through time, agents consider both the current prices at which they can buy and sell, and the prices at which they will (or expect to) be able to buy and sell in the future. Although it may sometimes be possible to buy or sell forward at a currently quoted price for future delivery, agents planning to buy and sell goods or services rely, for the most part, on their expectations of future prices. Those expectations, of course, need not always turn out to have been accurate.

The dynamic equilibrium described by Myrdal, Lindahl and Hayek is a contingent event in which all agents have correctly anticipated the future prices on which they have based their plans. In the event that some, if not all, agents have incorrectly anticipated future prices, those agents whose plans were based on incorrect expectations may have to revise their plans or be unable to execute them. But unless all agents share the same expectations of future prices, their expectations cannot all be correct, and some of those plans may not be realized.
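Stated compactly (my shorthand for the Myrdal-Lindahl-Hayek conception, not their own notation): if p^{e,h}_{t+k} denotes agent h’s expectation, formed at time t, of the prices that will prevail at date t+k, an intertemporal equilibrium of plans requires

\[ p^{e,h}_{t+k} = p_{t+k} \qquad \text{for every agent } h \text{ and every future date } t+k, \]

so that expectations agree both across agents and with the prices actually realized; any divergence of expectations across agents guarantees that at least some plans will be disappointed.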

The impossibility of an intertemporal equilibrium of optimal plans if agents do not share the same expectations of future prices implies that the adjustment of perfectly flexible market prices is not sufficient for an optimal equilibrium to be achieved. I shall have more to say about this point below, but for now I want to note that the growing interest in the quiet Walrasian revolution in neoclassical theory that occurred almost simultaneously with the Keynesian revolution made it inevitable that Keynesian models would be recast in explicitly Walrasian terms.

What emerged from the Walrasian reformulation of Keynesian analysis was the neoclassical synthesis that became the textbook version of macroeconomics in the 1960s and 1970s. But the seemingly anomalous conjunction of both inflation and unemployment during the 1970s led to a reconsideration and widespread rejection of the Keynesian proposition that output and employment are directly related to aggregate demand.

Indeed, supporters of the Monetarist views of Milton Friedman argued that the high inflation and unemployment of the 1970s amounted to an empirical refutation of the Keynesian system. But Friedman’s political conservatism, free-market ideology, and his acerbic criticism of Keynesian policies obscured the extent to which his largely atheoretical monetary thinking was influenced by Keynesian and Marshallian concepts that rendered his version of Monetarism an unattractive alternative for younger monetary theorists, schooled in the Walrasian version of neoclassicism, who were seeking a clear theoretical contrast with the Keynesian macro model.

The brief Monetarist ascendancy following the 1970s inflation conveniently collapsed in the early 1980s, after Friedman’s Monetarist policy advice for controlling the quantity of money proved unworkable, when central banks, foolishly trying to implement the advice, prolonged a needlessly deep recession while consistently overshooting their monetary targets, thereby provoking a long series of embarrassing warnings from Friedman about the imminent return of double-digit inflation.


[1] Hayek, both a friend and a foe of Keynes, would chide Keynes decades after Keynes’s death for calling his theory a general theory when, in Hayek’s view, it was a special theory relevant only in periods of substantially less than full employment, when increasing aggregate demand could increase total output. But in making this criticism, Hayek himself implicitly assumed the very thing that he had himself admitted, in his theory of intertemporal equilibrium, does not exist: an automatic equilibration mechanism that ensures that general equilibrium obtains.

The Walras-Marshall Divide in Neoclassical Theory, Part II

In my previous post, which itself followed up an earlier post “General Equilibrium, Partial Equilibrium and Costs,” I laid out the serious difficulties with neoclassical theory in either its Walrasian or Marshallian versions: its exclusive focus on equilibrium states with no plausible explanation of any economic process that leads from disequilibrium to equilibrium.

The Walrasian approach treats general equilibrium as the primary equilibrium concept, because no equilibrium solution in a single market can be isolated from the equilibrium solutions for all other markets. Marshall understood that no single market could be in isolated equilibrium independent of all other markets, but the practical difficulty of framing an analysis of the simultaneous equilibration of all markets made focusing on general equilibrium unappealing to Marshall, who wanted economic analysis to be relevant to the concerns of the public, i.e., policy makers and men of affairs whom he regarded as his primary audience.

Nevertheless, in doing partial-equilibrium analysis, Marshall conceded that it had to be embedded within a general-equilibrium context, so he was careful to specify the ceteris-paribus conditions under which partial-equilibrium analysis could be undertaken. In particular, any market under analysis had to be sufficiently small, or the disturbance to which that market was subject had to be sufficiently small, for the repercussions of the disturbance in that market to have only minimal effect on other markets, or, if substantial, those effects had to be concentrated on a specific market (e.g., the market for a substitute, or complementary, good).

By focusing on equilibrium in a single market, Marshall believed he was making the analysis of equilibrium more tractable than the Walrasian alternative of focusing on the analysis of simultaneous equilibrium in all markets. Walras chose to make his approach to general equilibrium, if not tractable, at least intuitive by appealing to the fiction of tatonnement conducted by an imaginary auctioneer adjusting prices in all markets in response to any inconsistencies in the plans of transactors preventing them from executing their plans at the announced prices.

But it eventually became clear, to Walras and to others, that tatonnement could not be considered a realistic representation of actual market behavior, because the tatonnement fiction disallows trading at disequilibrium prices by pausing all transactions while a complete set of equilibrium prices for all desired transactions is sought by a process of trial and error. Not only are all economic activity and the passage of time suspended during the tatonnement process, there is not even a price-adjustment algorithm that can be relied on to find a complete set of equilibrium prices in a finite number of iterations.
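A small numerical sketch may make the point concrete. The following Python fragment (an illustration of my own, using what I take to be Herbert Scarf's familiar three-good, three-consumer example with Leontief preferences) iterates the textbook price-adjustment rule, raising each price in proportion to its excess demand. The excess demands never die away, so the auctioneer's trial-and-error search never terminates in an equilibrium price vector.

```python
import numpy as np

def excess_demand(p):
    """Aggregate excess demand for the three goods at price vector p."""
    z = np.zeros(3)
    for i in range(3):               # consumer i: endowed with one unit of good i,
        j = (i + 1) % 3              # wants goods i and i+1 in fixed 1:1 proportions
        d = p[i] / (p[i] + p[j])     # Leontief demand for each of the two goods
        z[i] += d
        z[j] += d
    return z - 1.0                   # subtract the unit supply of each good

p = np.array([1.0, 1.2, 0.8])        # start away from the equilibrium (1, 1, 1)
step = 0.01                          # the auctioneer's adjustment speed (illustrative)
for t in range(1, 20001):
    p = p + step * excess_demand(p)  # adjust each price in proportion to its excess demand
    p = 3.0 * p / p.sum()            # normalize: only relative prices matter
    if t % 5000 == 0:
        print(t, np.round(p, 3), "max excess demand:",
              round(np.abs(excess_demand(p)).max(), 4))
```

Nothing in the example is pathological: individual excess demands are continuous, homogeneous of degree zero, and satisfy Walras's Law, yet the tatonnement path simply circles the equilibrium instead of converging to it.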

Despite its seeming realism, the Marshallian approach, piecemeal market-by-market equilibration of each distinct market, is no more tenable theoretically than tatonnement, the partial-equilibrium method being premised on a ceteris-paribus assumption in which all prices and all other endogenous variables determined in markets other than the one under analysis are held constant. That assumption can be maintained only on the condition that all markets are in equilibrium. So the implicit assumption of partial-equilibrium analysis is no less theoretically extreme than Walras’s tatonnement fiction.

In my previous post, I quoted Michel De Vroey’s dismissal of Keynes’s rationale for the existence of involuntary unemployment, a violation, in De Vroey’s estimation, of Marshallian partial-equilibrium premises. Let me quote De Vroey again.

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

A History of Macroeconomics from Keynes to Lucas and Beyond, pp. 22-23

My interpretation of De Vroey’s argument is that the strict Marshallian viewpoint requires that firms correctly anticipate the wages that they will have to pay in making their hiring and production decisions, while presumably also correctly anticipating the future demand for their products. I am unable to make sense of this argument unless it means that firms — and why should firm owners or managers be the only agents endowed with perfect or correct foresight? – correctly foresee the prices of the products that they sell and of the inputs that they purchase or hire. In other words, the strict Marshallian viewpoint invoked by De Vroey assumes that each transactor foresees, without the intervention of a timeless tatonnement process guided by a fictional auctioneer, the equilibrium price vector. When the strict Marshallian viewpoint is adopted, everything is indeed simple: every transactor is a Walrasian auctioneer.

My interpretation of Keynes – and perhaps I’m just reading my own criticism of partial-equilibrium analysis into Keynes – is that he understood that the aggregate labor market can’t be analyzed in a partial-equilibrium setting, because Marshall’s ceteris-paribus proviso can’t be maintained for a market that accounts for roughly half the earnings of the economy. When conditions change in the labor market, everything else also changes. So the equilibrium conditions of the labor market must be governed by aggregate equilibrium conditions that can’t be captured in, or accounted for by, a Marshallian partial-equilibrium framework. Because something other than supply and demand in the labor market determines the equilibrium, what happens in the labor market can’t, by itself, restore an equilibrium.

That, I think, was Keynes’s intuition. But while identifying a serious defect in the Marshallian viewpoint, that intuition did not provide an adequate theory of adjustment. But the inadequacy of Keynes’s critique doesn’t rehabilitate the Marshallian viewpoint, certainly not in the form in which De Vroey represents it.

But there’s a deeper problem with the Marshallian viewpoint than just the interdependence of all markets. Although Marshall accepted marginal-utility theory in principle and used it to explain consumer demand, he tried to limit its application to demand while retaining the classical theory of the cost of production as a coordinate factor explaining the relative prices of goods and services. Marginal utility determines demand while cost determines supply, so that the interaction of supply and demand (cost and utility) jointly determine price just as the two blades of a scissor jointly cut a piece of cloth or paper.

This view of the role of cost could be maintained only in the context of the typical Marshallian partial-equilibrium exercise in which all prices — including input prices — except the price of a single output are held fixed at their general-equilibrium values. But the equilibrium prices of inputs are not determined independently of the values of the outputs they produce, so their equilibrium market values are derived exclusively from the value of whatever outputs they produce.

This was a point that Marshall, desiring to minimize the extent to which the Marginal Revolution overturned the classical theory of value, either failed to grasp or obscured: that prices and costs are simultaneously determined. By focusing on partial-equilibrium analysis, in which input prices are treated as exogenous variables rather than, as in general-equilibrium analysis, endogenously determined variables, Marshall was able to argue as if the classical theory that the cost incurred to produce something determines its value or its market price had not been overturned.

The absolute dependence of input prices on the value of the outputs that they are being used to produce was grasped more clearly by Carl Menger than by Walras and certainly more clearly than by Marshall. What’s more, unlike either Walras or Marshall, Menger explicitly recognized the time lapse between the purchasing and hiring of inputs by a firm and the sale of the final output, inputs having been purchased or hired in expectation of the future sale of the output. But expected future sales are at prices anticipated, but not known, in advance, making the valuation of inputs equally conjectural and forcing producers to make commitments without knowing either their costs or their revenues before undertaking those commitments.

It is precisely this contingent relationship between the expectation of future sales at unknown, but anticipated, prices and the valuations that firms attach to the inputs they purchase or hire that provides an alternative to the problematic Marshallian and Walrasian accounts of how equilibrium market prices are actually reached.

The critical role of expected future prices in determining equilibrium prices was missing from both the Marshallian and the Walrasian theories of price determination. In the Walrasian theory, price determination was attributed to a fictional tatonnement process that Walras originally thought might serve as a kind of oversimplified and idealized version of actual market behavior. But Walras seems eventually to have recognized and acknowledged how far removed from reality his tatonnement invention actually was.

The seemingly more realistic Marshallian account of price determination avoided the unrealism of the Walrasian auctioneer, but only by attributing equally, if not more, unrealistic powers of foreknowledge to the transactors than Walras had attributed to his auctioneer. Only Menger, who realistically avoided attributing extraordinary knowledge either to transactors or to an imaginary auctioneer, instead attributing to transactors only an imperfect and fallible ability to anticipate future prices, provided a realistic account, or at least a conceptual approach toward a realistic account, of how prices are actually formed.

In a future post, I will try to spell out in greater detail my version of a Mengerian account of price formation and what this account might tell us about the process by which a set of equilibrium prices might be realized.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
