Archive for the 'Robert Lucas' Category

My Paper “Robert Lucas and the Pretense of Science” Is Now Available on SSRN

Peter Howitt, whom I got to know slightly when he spent a year at UCLA while we were both graduate students, received an honorary doctorate from Côte d’Azur University in September. Here is a link to the press release of the University marking the award.

Peter wrote his dissertation under Robert Clower, and when Clower moved from Northwestern to UCLA in the early 1970s, Peter followed Clower as he was finishing up his dissertation. Much of Peter’s early work was devoted to trying to develop the macroeconomic ideas of Clower and Leijonhufvud. His book The Keynesian Recovery collects those important early papers which, unfortunately, did not thwart the ascendance, as Peter was writing those papers, of the ideas of Robert Lucas and his many followers, or the eventual dominance of those ideas over modern macroeconomics.

In addition to the award, a workshop on Coordination Issues in Historical Perspective was organized in Peter’s honor, and my paper, “Robert Lucas and the Pretense of Science,” which shares many of Peter’s misgivings about the current state of macroeconomics, was one of the papers presented at the workshop. In writing the paper, I drew on several posts that I have written for this blog over the years. I have continued to revise the paper since then, and the current version is now available on SSRN.

Here’s the abstract:

Hayek and Lucas were both known for their critiques of Keynesian theory on both theoretical and methodological grounds. Hayek (1934) criticized the idea that continuous monetary expansion could permanently increase total investment, foreshadowing Friedman’s (1968) argument that monetary expansion could not permanently increase employment. Friedman’s analysis set the stage for Lucas’s (1976) critique of macroeconomic policy analysis, a critique that Hayek (1975) had also anticipated. Hayek’s (1942-43) advocacy of methodological individualism might also be considered an anticipation of Lucas’s methodological insistence on the necessity of rejecting Keynesian and other macroeconomic theories not based on explicit microeconomic foundations. This paper compares Hayek’s methodological individualism with Lucasian microfoundations. While Lucasian microfoundations requires all agents to make optimal choices, Hayek recognized that optimization by interdependent agents is a contingent, not a necessary, state of reconciliation and that the standard equilibrium theory on which Lucas relies does not prove that, or explain how, such a reconciliation is, or can be, achieved. The paper further argues that Lucasian microfoundations is a form of what Popper called philosophical reductionism that is incompatible with Hayekian methodological individualism.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4260708

Robert Lucas and Real Business-Cycle Theory

In 1978 Robert Lucas and Thomas Sargent launched a famous attack on Keynes and Keynesian economics, which they viewed as having been discredited by the confluence of high inflation and high unemployment in the 1970s. They also expressed optimism that an equilibrium approach to business-cycle modeling would succeed in replicating reasonably well the observed time-series variables relating to output and employment. In particular, they posited that a model subjected to an unexpected monetary shock, causing an immediate downturn from an equilibrium time path, would be followed by a gradual reversion to that time path, thereby capturing the main stylized facts of historical business cycles. Their optimism was disappointed, because the model that Lucas had developed, based on an informational imperfection preventing agents from distinguishing immediately between real and nominal price changes, could not account for the typical multi-period duration of business-cycle downturns.

It was this empirical anomaly in Lucas’s monetary business-cycle model that prompted Kydland and Prescott to construct their real-business cycle model. Lucas warmly welcomed their contribution, the abandonment of the monetary-theoretical motivation that Lucas had inherited from his academic training at Chicago being a small price to pay for the advancement of the larger research agenda derived from his methodological imperatives.

The real-business-cycle variant of the Lucasian research program rested on two empirical pillars: (1) the identification of technology shocks with deviations, as measured by the Solow residual, from the trend rate of increase in total factor productivity, positive residuals corresponding to positive shocks and negative residuals corresponding to negative shocks; and (2) estimates of the elasticity of intertemporal labor substitution.
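Pillar (1) reduces to simple growth accounting. Here is a minimal sketch of the computation, assuming a Cobb-Douglas production function with a capital share of 0.3; all the data below are invented purely for illustration, not drawn from any actual series:

```python
import numpy as np

# Invented annual data: output Y, capital K, labor L.
Y = np.array([100.0, 103.0, 104.0, 102.0, 106.0])
K = np.array([300.0, 306.0, 312.0, 315.0, 321.0])
L = np.array([150.0, 151.5, 152.0, 150.5, 153.0])
alpha = 0.3  # assumed capital share of income

# Growth rates as log differences.
gY, gK, gL = (np.diff(np.log(x)) for x in (Y, K, L))

# Solow residual: output growth not accounted for by share-weighted input growth.
# Positive residuals are read as positive technology shocks, negative as negative.
residual = gY - alpha * gK - (1 - alpha) * gL
print(np.round(residual, 4))
```

Note that in the downturn year (output falling from 102 to 106's predecessor level) the residual comes out negative, which is exactly the identification the text questions: a negative residual could equally reflect idle labor and capital in disequilibrium rather than technological regress.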

Positive productivity shocks induce wage increases, and negative shocks induce wage decreases. Responding to these shifts in wages, presumed to be temporary, workers increase the amount of labor supplied in response to above-trend increases in wages and decrease the amount of labor supplied in response to below-trend increases in wages. The higher the elasticity of intertemporal labor substitution, the greater the supply response to a given deviation of actual wages from the expected trend rate of increase in wages. Real-business-cycle theorists used calibration techniques to obtain estimates of labor-supply elasticities from microeconomic studies.
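The mechanism just described amounts to a back-of-the-envelope calculation: the percent change in hours supplied is roughly the elasticity times the perceived temporary deviation of the wage from trend. The function and values below are a hypothetical sketch, not estimates from any study:

```python
def hours_response(wage_deviation_pct, elasticity):
    """Approximate percent change in hours supplied, given a temporary
    percent deviation of the wage from its expected trend."""
    return elasticity * wage_deviation_pct

# A 2% above-trend wage, under three assumed elasticities of
# intertemporal labor substitution.
for eta in (0.5, 1.0, 2.0):
    print(eta, hours_response(2.0, eta))
```

The calibration dispute is over which value of `eta` to plug in: microeconomic studies tend to find small values, while RBC models need large ones to generate employment fluctuations of observed size.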

The real-business-cycle variant of the Lucasian research program embraced all the dubious methodological precepts of its parent while adding further dubious practices of its own. Most problematic, of course, is the methodological insistence that equilibrium is necessarily and continuously maintained, which is possible only if all agents correctly anticipate future prices and wages. If equilibrium is not continuously maintained, then Solow residuals may capture not productivity shocks, but, depending on their sign, either movements away from, or toward, equilibrium. In disequilibrium, labor and capital may be held idle by firms in anticipation of subsequent increases in output, so that measured productivity does not reflect the state of technology, but the inherent inefficiency of unemployment resulting from coordination failure, a contingency explicitly deemed by Lucasian methodology to be off limits.

Such ad hocery is generally frowned upon by scientists. Ad hoc assumptions are not always unscientific or unproductive, as famously exemplified by the discovery of Neptune. But in that case, the ad hoc assumption was subject to empirical testing; Neptune might not have been there waiting to be discovered. No independent test of the presence or absence of a technology shock, aside from the Solow residual itself, is available. Even this situation might be tolerable if Lucasian methodology permitted one to inquire whether the world, or an economy, might not be in an equilibrium state. But Lucasian methodology forbids such an inquiry.

The use of calibration to estimate intertemporal labor-supply elasticities from microeconomic studies is also extremely dubious, because microeconomic estimates of labor-supply elasticities are typically made under conditions approximating equilibrium, when workers have some flexibility in choosing whether to work more or less in the present or in the future. Those are not the conditions in which workers find themselves in periods of high aggregate unemployment, when they are not confident that they will retain their jobs in the present and near future, or, if they lose their jobs, that they will succeed in finding another job at an acceptable wage. The calibrated estimates of labor-supply elasticity are, for exactly the reasons identified in the Lucas Critique, unreliable for use in replicating time series.

An early real-business-cycle theorist, Charles Plosser (“Understanding Real Business Cycles”), responded to criticisms of the RBC techniques as follows:

If the measured technological shocks are poor estimates (that is, they are confounded by other factors such as “demand” shocks, preference shocks or change in government policies, and so on) then feeding these values into our real business cycle model should result in poor predictions for the behavior of consumption, investment, hours worked, wages and output.

Plosser’s response ignores the question-begging nature of the RBC model; the supposed productivity shocks that cause cyclical fluctuations in the model are identified by the very time series that the model purports to explain. Nor does calibration provide clear and unambiguous estimates that the modeler can transfer without exercising discretion about which studies and which values to insert into an RBC model. Plosser’s defense of RBC is not so very different from the sort of defense made on behalf of the highly accurate epicyclical replications of observed planetary movements, replications that were based largely on the ingenuity and diligence of the epicyclist.

Eventually, the methodological prohibitions against heliocentrism were overcome. Perhaps, one day, the methodological prohibitions against non-reductionist macroeconomic theories will also be overcome.

Lucasian macroeconomics gained not only ascendance, but dominance, on the basis of conceptual and methodological misunderstandings. The continued dominance of the offspring of the early Lucasian theories has been portrayed as a scientific advance by Lucas and his followers. In fact, the theories and the supposed methodological imperatives by which they have been justified are scientifically suspect because they rely on circular, question-begging arguments and reject alternative theories based on specious reductionist arguments.

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), were focused on technical identification issues and on the dependence of estimated regression coefficients of econometric models on agents’ expectations conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for the Keynesian abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and has not, described in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not too realistic stories about how markets might clear, of which the two best-known stories are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good correctly anticipated by market participants corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so it is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing – which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his or her plan as intended – comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent try to normalize their insistence that equilibrium is an obligatory postulate by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if the routine adoption of any theoretical or methodological assumption becomes ipso facto justified once adopted routinely. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that the hypothesis that any collection of economic time series is either in equilibrium or out of equilibrium is empirically empty. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium supports the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous, are homogeneous of degree zero, and satisfy Walras’s Law, aggregating the excess demands of all agents would not necessarily cause the aggregate excess-demand function to behave in such a way that a unique, or even a stable, equilibrium would exist. But if we have no good argument to explain why a unique or at least a stable neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?

As Franklin Fisher (1989) showed, inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), comparative statics exercises, is conditional on the assumption that there is at least one stable general equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Kepler and later Galileo conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system and proposing that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric predictions of the observed time series outperformed the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative lacking scientific or philosophical justification, of formal intertemporal optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from a more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with some of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time series variables, current macroeconomic disputes are real disputes about what the world is like and whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or about coordination failures that cause economic agents to be surprised and disappointed and rendered unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It’s long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, having explicitly argued at length that his results did not depend on the assumption of sticky wages, Keynes certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence of micro (utility and cost theory, econ 101a), (general equilibrium theory, 101b), and (macro theory, 102) as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, though his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement, highlighting its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later, when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful, so that macroeconomics was often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
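The IS-LM mechanics just described amount to solving two linear equations in income Y and the interest rate r. The sketch below uses invented parameter values chosen only for illustration, not anything from Axel's course or any textbook:

```python
import numpy as np

# Illustrative parameters (all invented):
c, b = 0.8, 20.0    # marginal propensity to consume; interest sensitivity of investment
k, h = 0.25, 10.0   # income and interest sensitivities of money demand
A, M = 100.0, 50.0  # autonomous spending; fixed money stock

# IS curve: Y = A + c*Y - b*r   ->  (1 - c)*Y + b*r = A
# LM curve: M = k*Y - h*r       ->  k*Y - h*r = M
coeffs = np.array([[1 - c, b],
                   [k, -h]])
rhs = np.array([A, M])
Y, r = np.linalg.solve(coeffs, rhs)
print(round(Y, 2), round(r, 2))  # → 285.71 2.14
```

Setting `h` very large (a nearly flat LM curve) reproduces the extreme Keynesian case in which fiscal policy dominates, while setting `h` near zero reproduces the Monetarist case, which is exactly the elasticity debate the paragraph describes.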

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own preview of his forthcoming book, “Keynes and the Keynesians: A Suggested Interpretation,” and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, the fall 1968 quarter, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servomechanisms, and stable and unstable dynamic systems, and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener's book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel's hopes that cybernetics could provide an analytical tool with which to bring Keynes's insights about informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index of Axel's excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different path, following the path blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel's macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can't argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that, if the real wage in an economy is somehow stuck above the market-clearing level, an increase in aggregate demand is needed to raise the price level and thereby reduce the real wage, at the prevailing nominal wage, down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment could occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn't arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say's Principle, which years later led me to write my paper "Say's Law and the Classical Theory of Depressions," included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Attention to the implications of the lack of an actual coordinating mechanism simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption) by neoclassical economic theory was, in Axel’s view, the great contribution of Keynes. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers; b) assuming that all agents therefore form identical rational expectations of all future prices using the same common knowledge; so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection, an ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real productivity shocks, dispensing with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis, but with little claim to any actual analytical insight.

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. Earl explained the method as resting on the assumption that markets for current delivery clear, but at market-clearing prices different from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek's work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

Hayek and the Lucas Critique

In March I wrote a blog post, "Robert Lucas and the Pretense of Science," which was a draft proposal for a paper for a conference on Coordination Issues in Historical Perspective to be held in September. My proposal having been accepted, I'm going to post sections of the paper on the blog in hopes of getting some feedback as I write the paper. What follows is the first of several anticipated draft sections.

Just 31 years old, F. A. Hayek rose rapidly to stardom after giving four lectures at the London School of Economics at the invitation of his almost exact contemporary, and soon to be best friend, Lionel Robbins. Hayek had already published several important works, of which Hayek ([1928] 1984), laying out a basic conceptualization of intertemporal equilibrium almost simultaneously with the similar conceptualizations of two young Swedish economists, Gunnar Myrdal (1927) and Erik Lindahl ([1929] 1939), was the most important.

Hayek's (1931a) LSE lectures aimed to provide a policy-relevant version of a specific theoretical model of the business cycle that drew upon, but was just a particular instantiation of, the general conceptualization developed in his 1928 contribution. Delivered less than two years after the start of the Great Depression, Hayek's lectures gave a historical overview of the monetary theory of business cycles, an account of how monetary disturbances cause real effects, and a skeptical discussion of how monetary policy might, or more likely might not, counteract or mitigate the downturn then underway. It was Hayek's skepticism about countercyclical policy that helped make those lectures so compelling but also elicited such a hostile reaction during the unfolding crisis.

The extraordinary success of his lectures established Hayek's reputation as a preeminent monetary theorist alongside established figures like Irving Fisher, A. C. Pigou, D. H. Robertson, R. G. Hawtrey, and of course J. M. Keynes. Hayek's (1931b) critical review of Keynes's just-published Treatise on Money (1930), which appeared soon after his LSE lectures and provoked a heated exchange with Keynes himself, showed him to be a skilled debater and a powerful polemicist.

Hayek's meteoric rise was, however, followed by a rapid fall from the briefly held pinnacle of his early career. Aside from the imperfections and weaknesses of his own theoretical framework (Glasner and Zimmerman 2021), his diagnosis of the causes of the Great Depression (Glasner and Batchelder [1994] 2021a, 2021b) and his policy advice (Glasner 2021) were theoretically misguided and inappropriate to the deflationary conditions underlying the Great Depression.

Nevertheless, Hayek’s conceptualization of intertemporal equilibrium provided insight into the role not only of prices, but also of price expectations, in accounting for cyclical fluctuations. In Hayek’s 1931 version of his cycle theory, the upturn results from bank-financed investment spending enabled by monetary expansion that fuels an economic boom characterized by increased total spending, output and employment. However, owing to resource constraints, misalignments between demand and supply, and drains of bank reserves, the optimistic expectations engendered by the boom are doomed to eventual disappointment, whereupon a downturn begins.

I need not engage here with the substance of Hayek's cycle theory, which I have criticized elsewhere (see references above). But I would like to consider his 1934 explanation, responding to Hansen and Tout (1933), of why a permanent monetary expansion would be impossible. Hansen and Tout disputed Hayek's contention that monetary expansion must inevitably lead to a recession, arguing that an unconstrained monetary authority, not being forced by a reserve drain to halt a monetary expansion, could allow a boom to continue indefinitely, permanently maintaining an excess of investment over saving.

Hayek (1934) responded as follows:

[A] constant rate of forced saving (i.e., investment in excess of voluntary saving) [requires] a rate of credit expansion which will enable the producers of intermediate products, during each successive unit of time, to compete successfully with the producers of consumers’ goods for constant additional quantities of the original factors of production. But as the competing demand from the producers of consumers’ goods rises (in terms of money) in consequence of, and in proportion to, the preceding increase of expenditure on the factors of production (income), an increase of credit which is to enable the producers of intermediate products to attract additional original factors, will have to be, not only absolutely but even relatively, greater than the last increase which is now reflected in the increased demand for consumers’ goods. Even in order to attract only as great a proportion of the original factors, i.e., in order merely to maintain the already existing capital, every new increase would have to be proportional to the last increase, i.e., credit would have to expand progressively at a constant rate. But in order to bring about constant additions to capital, it would have to do more: it would have to increase at a constantly increasing rate. The rate at which this rate of increase must increase would be dependent upon the time lag between the first expenditure of the additional money on the factors of production and the re-expenditure of the income so created on consumers’ goods. . . .

But I think it can be shown . . . that . . . such a policy would . . . inevitably lead to a rapid and progressive rise in prices which, in addition to its other undesirable effects, would set up movements which would soon counteract, and finally more than offset, the “forced saving.” That it is impossible, either for a simple progressive increase of credit which only helps to maintain, and does not add to, the already existing “forced saving,” or for an increase in credit at an increasing rate, to continue for a considerable time without causing a rise in prices, results from the fact that in neither case have we reason to assume that the increase in the supply of consumers’ goods will keep pace with the increase in the flow of money coming on to the market for consumers’ goods. Insofar as, in the second case, the credit expansion leads to an ultimate increase in the output of consumers’ goods, this increase will lag considerably and increasingly (as the period of production increases) behind the increase in the demand for them. But whether the prices of consumers’ goods will rise faster or slower, all other prices, and particularly the prices of the original factors of production, will rise even faster. It is only a question of time when this general and progressive rise of prices becomes very rapid. My argument is not that such a development is inevitable once a policy of credit expansion is embarked upon, but that it has to be carried to that point if a certain result—a constant rate of forced saving, or maintenance without the help of voluntary saving of capital accumulated by forced saving—is to be achieved.
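The logic of the passage can be restated schematically. The notation here is mine, not Hayek's, and the one-period lag between factor payments and consumption spending is a simplifying assumption:

```latex
% \Delta C_t denotes the credit injection in period t; factor incomes
% created in period t are assumed to be spent on consumers' goods in t+1.

% Merely to maintain the already existing forced saving, each injection
% must keep pace with the consumption demand generated by the previous
% one, so credit must grow at a constant proportional rate g:
\[
\Delta C_t = (1+g)\,\Delta C_{t-1}
\]

% To achieve a constant rate of forced saving (constant additions to
% capital), injections must outpace the induced consumption demand, so
% the growth rate of credit must itself keep rising:
\[
\Delta C_t = (1+g_t)\,\Delta C_{t-1}, \qquad g_t > g_{t-1}
\]
```

Since credit cannot grow at an ever-increasing rate indefinitely without a rapid and progressive rise in prices, the expansion must eventually be abandoned, which is the conclusion Hayek draws in the passage.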

Friedman’s (1968) argument about why monetary expansion could not permanently reduce unemployment below its “natural rate” closely mirrors Hayek’s argument (though Friedman almost certainly never read it) that monetary expansion could not permanently maintain a rate of investment spending above the rate of voluntary saving. Generalizing Friedman’s logic, Lucas (1976) transformed it into a critique of using econometric estimates of relationships like the Phillips Curve, the specific target of Friedman’s argument, as a basis for predicting the effects of policy changes, such estimates being conditional on implicit expectational assumptions that aren’t invariant to the policy changes derived from those estimates.

Stated differently, such econometric estimates are reduced forms that, without identifying restrictions, do not allow the estimated regression coefficients to be used to predict the effects of a policy change.

Only by specifying, and estimating, the deep structural relationships governing the response to a policy change could the effect of a potential policy change be predicted with some confidence that the prediction would not prove erroneous because of changes in the econometrically estimated relationships once agents altered their behavior in response to the policy change.

In his 1974 Nobel Lecture, Hayek offered a similar explanation of why an observed correlation between aggregate demand and employment provides no basis for predicting the effect of policies aimed at increasing aggregate demand and reducing unemployment if the likely changes in structural relationships caused by those policies are not taken into account.

[T]he very measures which the dominant “macro-economic” theory has recommended as a remedy for unemployment, namely the increase of aggregate demand, have become a cause of a very extensive misallocation of resources which is likely to make later large-scale unemployment inevitable. The continuous injection . . . money at points of the economic system where it creates a temporary demand which must cease when the increase of the quantity of money stops or slows down, together with the expectation of a continuing rise of prices, draws labour . . . into employments which can last only so long as the increase of the quantity of money continues at the same rate – or perhaps even only so long as it continues to accelerate at a given rate. What this policy has produced is not so much a level of employment that could not have been brought about in other ways, as a distribution of employment which cannot be indefinitely maintained . . . The fact is that by a mistaken theoretical view we have been led into a precarious position in which we cannot prevent substantial unemployment from re-appearing; not because . . . this unemployment is deliberately brought about as a means to combat inflation, but because it is now bound to occur as a deeply regrettable but inescapable consequence of the mistaken policies of the past as soon as inflation ceases to accelerate.

Hayek’s point that an observed correlation between the rate of inflation (a proxy for aggregate demand) and unemployment cannot be relied on in making economic policy was articulated succinctly and abstractly by Lucas as follows:

In short, one can imagine situations in which empirical Phillips curves exhibit long lags and situations in which there are no lagged effects. In either case, the “long-run” output inflation relationship as calculated or simulated in the conventional way has no bearing on the actual consequences of pursing a policy of inflation.

[T]he ability . . . to forecast consequences of a change in policy rests crucially on the assumption that the parameters describing the new policy . . . are known by agents. Over periods for which this assumption is not approximately valid . . . empirical Phillips curves will appear subject to “parameter drift,” describable over the sample period, but unpredictable for all but the very near future.

The lesson inferred by both Hayek and Lucas was that Keynesian macroeconomic models of aggregate demand, inflation and employment can’t reliably guide economic policy and should be discarded in favor of models more securely grounded in the microeconomic theories of supply and demand that emerged from the Marginal Revolution of the 1870s and eventually became the neoclassical economic theory describing the characteristics of an efficient, decentralized and self-regulating economic system. This was the microeconomic basis on which Hayek and Lucas believed macroeconomic theory ought to be based instead of the Keynesian system that they were criticizing. But that superficial similarity obscures the profound methodological and substantive differences between them.

Those differences will be considered in future posts.

References

Friedman, M. 1968. “The Role of Monetary Policy.” American Economic Review 58(1):1-17.

Glasner, D. 2021. “Hayek, Deflation, Gold and Nihilism.” Ch. 16 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. [1994] 2021a. “Debt, Deflation, the Gold Standard and the Great Depression.” Ch. 13 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. 2021b. “Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel?” Ch. 14 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Zimmerman, P. 2021. “The Sraffa-Hayek Debate on the Natural Rate of Interest.” Ch. 15 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Hansen, A. and Tout, H. 1933. “Annual Survey of Business Cycle Theory: Investment and Saving in Business Cycle Theory,” Econometrica 1(2): 119-47.

Hayek, F. A. [1928] 1984. “Intertemporal Price Equilibrium and Movements in the Value of Money.” In R. McCloughry (Ed.), Money, Capital and Fluctuations: Early Essays (pp. 171–215). Routledge.

Hayek, F. A. 1931a. Prices and Production. London: Macmillan.

Hayek, F. A. 1931b. “Reflections on the Pure Theory of Money of Mr. Keynes.” Economica 33:270-95.

Hayek, F. A. 1934. “Capital and Industrial Fluctuations.” Econometrica 2(2): 152-67.

Keynes, J. M. 1930. A Treatise on Money. 2 vols. London: Macmillan.

Lindahl, E. [1929] 1939. “The Place of Capital in the Theory of Price.” In E. Lindahl, Studies in the Theory of Money and Capital. George, Allen & Unwin.

Lucas, R. E. [1976] 1985. “Econometric Policy Evaluation: A Critique.” In R. E. Lucas, Studies in Business-Cycle Theory. Cambridge: MIT Press.

Myrdal, G. 1927. Prisbildningsproblemet och Foranderligheten (Price Formation and the Change Factor). Almqvist & Wicksell.

On the Labor Supply Function

The bread and butter of economics is demand and supply. The basic idea of a demand function (or a demand curve) is to describe a relationship between the price at which a given product, commodity or service can be bought and the quantity that will be bought by some individual. The standard assumption is that the quantity demanded increases as the price falls, so that the demand curve is downward-sloping, but not much more can be said about the shape of a demand curve unless special assumptions are made about the individual’s preferences.

Demand curves aren’t natural phenomena with concrete existence; they are hypothetical or notional constructs pertaining to individual preferences. To pass from individual demands to a market demand for a product, commodity or service requires another conceptual process summing the quantities demanded by each individual at any given price. The conceptual process is never actually performed, so the downward-sloping market demand curve is just presumed, not observed as a fact of nature.

The summation process required to pass from individual demands to a market demand implies that the quantity demanded at any price is the quantity demanded when each individual pays exactly the same price that every other demander pays. At a price of $10/widget, the widget demand curve tells us how many widgets would be purchased if every purchaser in the market can buy as much as desired at $10/widget. If some customers can buy at $10/widget while others have to pay $20/widget or some can’t buy any widgets at any price, then the quantity of widgets actually bought will not equal the quantity on the hypothetical widget demand curve corresponding to $10/widget.

Similar reasoning underlies the supply function or supply curve for any product, commodity or service. The market supply curve is built up from the preferences and costs of individuals and firms and represents the amount of a product, commodity or service that they would be willing to offer for sale at different prices. The market supply curve is the result of a conceptual summation process that adds up the amounts that would hypothetically be offered for sale by every agent at different prices.
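In symbols (the notation is mine), the summation underlying both market curves is simply:

```latex
% Market demand and supply at a uniform price p are the sums of the
% notional individual quantities demanded (d_i) and supplied (s_j)
% at that same price:
\[
D(p) = \sum_{i} d_i(p), \qquad S(p) = \sum_{j} s_j(p)
\]
% Both sums presuppose that every agent faces, and can trade without
% limit at, the single price p.
```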

The point of this pedantry is to emphasize that the demand and supply curves we use are drawn on the assumption that a single uniform price prevails in every market and that all demanders and suppliers can trade without limit at those prices, so that their trading plans are fully executed. This is the equilibrium paradigm underlying the supply-demand analysis of econ 101.

Economists quite unself-consciously deploy supply-demand concepts to analyze labor markets in a variety of settings. Sometimes, if the labor market under analysis is limited to a particular trade or a particular skill or a particular geographic area, the supply-demand framework is reasonable and appropriate. But when applied to the aggregate labor market of the whole economy, the supply-demand framework is inappropriate, because the ceteris-paribus proviso (all prices other than the price of the product, commodity or service in question are held constant) attached to every supply-demand model is obviously violated.

Thoughtlessly applying a simple supply-demand model to the labor market of an entire economy leads to the conclusion that widespread unemployment, in which some workers are unemployed but would have accepted employment at wages that comparably skilled workers are actually receiving, implies that wages are above the market-clearing level consistent with full employment.

The attached diagram shows the simplest version of this analysis. The market wage (W1) is higher than the equilibrium wage (We) at which all workers willing to accept that wage could be employed. The difference between the number of workers seeking employment at the market wage (LS) and the number of workers that employers seek to hire (LD) measures the amount of unemployment. According to this analysis, unemployment would be eliminated if the market wage fell from W1 to We.
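In symbols (again, the notation is mine), the diagram's measure of unemployment is just the excess supply of labor at the prevailing wage:

```latex
% Unemployment U is the gap between labor supplied and labor demanded
% at the market wage W_1, which vanishes at the equilibrium wage W_e:
\[
U = L^{S}(W_1) - L^{D}(W_1) > 0 \quad \text{for } W_1 > W_e,
\qquad L^{S}(W_e) = L^{D}(W_e)
\]
```

The critique that follows is that both the supply and demand functions in this expression are drawn under ceteris-paribus assumptions that cannot hold for the aggregate labor market.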

Applying supply-demand analysis to aggregate unemployment fails on two levels. First, workers clearly are unable to execute their plans to offer their labor services at the wage at which other workers are employed, so individual workers are off their supply curves. Second, it is impossible to assume, as supply-demand analysis requires, that all other prices and incomes remain constant so that the demand and supply curves do not move as wages and employment change. When multiple variables are mutually interdependent and simultaneously determined, the analysis of just two variables (wages and employment) cannot be isolated from the rest of the system. Focusing on the wage as the variable that needs to change to restore full employment is an example of tunnel vision.

Keynes rejected the idea that economy-wide unemployment could be eliminated by cutting wages. Although Keynes’s argument against wage cuts as a cure for unemployment was flawed, he did have at least an intuitive grasp of the basic weakness in the argument for wage cuts: that high aggregate unemployment is not usefully analyzed as a symptom of excessive wages. To explain why wage cuts aren’t the cure for high unemployment, Keynes introduced a distinction between voluntary and involuntary unemployment.

Forty years later, Robert Lucas began his effort — not the first such effort, but by far the most successful — to discredit the concept of involuntary unemployment. Here’s an early example:

Keynes [hypothesized] that measured unemployment can be decomposed into two distinct components: ‘voluntary’ (or frictional) and ‘involuntary’, with full employment then identified as the level prevailing when involuntary unemployment equals zero. It seems appropriate, then, to begin by reviewing Keynes’ reasons for introducing this distinction in the first place. . . .

Accepting the necessity of a distinction between explanations for normal and cyclical unemployment does not, however, compel one to identify the first as voluntary and the second as involuntary, as Keynes goes on to do. This terminology suggests that the key to the distinction lies in some difference in the way two different types of unemployment are perceived by workers. Now in the first place, the distinction we are after concerns sources of unemployment, not differentiated types. . . .[O]ne may classify motives for holding money without imagining that anyone can subdivide his own cash holdings into “transactions balances,” “precautionary balances”, and so forth. The recognition that one needs to distinguish among sources of unemployment does not in any way imply that one needs to distinguish among types.

Nor is there any evident reason why one would want to draw this distinction. Certainly the more one thinks about the decision problem facing individual workers and firms the less sense this distinction makes. The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesired capital loss. Nevertheless, the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

Lucas, Studies in Business Cycle Theory, pp. 241-43

Consider this revision of Lucas’s argument:

The expressway driver who is slowed down in a traffic jam does not volunteer to be in this situation; he has suffered a waste of his time. Nevertheless, the driver can get off the expressway at the next exit to find an alternate route. Thus, there is an involuntary element in every traffic jam, in the sense that no one chooses to waste time; there is also a voluntary element in all traffic jams, in the sense that however stuck one is in traffic, one can always take the next exit on the expressway.

What is lost on Lucas is that, for an individual worker, taking a wage cut to avoid being laid off by the employer accomplishes nothing, because the willingness of a single worker to accept a wage cut would not induce the employer to increase output and employment. Unless all workers agreed to take wage cuts, a wage cut accepted by one employee would not cause the employer to reconsider its plan to reduce output in the face of declining demand for its product. Only the collective offer of all workers to accept a wage cut would induce an output response by the employer and a decision not to lay off part of its work force.

But even a collective offer by all workers to accept a wage cut would be unlikely to avoid an output reduction and layoffs. Consider a simple case in which the demand for the employer’s output declines by a third. Suppose the employer’s marginal cost of output is half the selling price (implying a demand elasticity of -2). Assume that demand is linear. With no change in its marginal cost, the firm would reduce output by a third, presumably laying off up to a third of its employees. Could workers avoid the layoffs by accepting lower wages to enable the firm to reduce its price? Or asked in another way, how much would marginal cost have to fall for the firm not to reduce output after the demand reduction?

Working out the algebra, one finds that for the firm to keep producing as much after a one-third reduction in demand, the firm’s marginal cost would have to fall by two-thirds, a decline that could only be achieved by a radical reduction in labor costs. This is surely an oversimplified view of the alternatives available to workers and employers, but the point is that workers facing a layoff after a decline in the demand for the product they produce have almost no ability to remain employed, even by collectively accepting a wage cut.
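The setup of the example can be checked numerically, under one possible reading in which “demand declines by a third” means that the quantity demanded at every price falls by a third, with linear demand and constant marginal cost; the parameter values (a, b, c) are illustrative assumptions, not from the text:

```python
# Monopoly with linear demand p = a - b*q and constant marginal cost c.
# Parameters chosen (illustratively) so that MC is half the optimal price.
a, b, c = 3.0, 1.0, 1.0

def profit(q, demand_scale):
    # With demand q(p) = scale*(a - p)/b, inverse demand is p = a - (b/scale)*q
    p = a - (b / demand_scale) * q
    return p * q - c * q

def argmax_q(demand_scale, grid=100000):
    # Brute-force profit maximization over a fine grid of outputs
    qs = [i * 3.0 / grid for i in range(1, grid)]
    return max(qs, key=lambda q: profit(q, demand_scale))

q0 = argmax_q(1.0)              # initial optimum
p0 = a - b * q0
elasticity = -(1 / b) * (p0 / q0)

q1 = argmax_q(2.0 / 3.0)        # after demand falls by a third at every price

print(f"initial output {q0:.3f}, price {p0:.3f}, MC/price = {c / p0:.3f}")
print(f"demand elasticity at the optimum: {elasticity:.3f}")
print(f"output after the demand decline: {q1:.3f} (a fall of {1 - q1 / q0:.1%})")
```

The sketch confirms the example as stated: at the initial optimum marginal cost is half the price and demand elasticity is -2, and with marginal cost unchanged the one-third fall in demand produces a one-third fall in output, and hence layoffs.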

That conclusion applies a fortiori when decisions whether to accept a wage cut are left to individual workers, because the willingness of workers individually to accept a wage cut is irrelevant to their chances of retaining their jobs. Being laid off because of a decline in the demand for the product a worker is producing is a much different situation from being laid off because the worker’s employer is shifting to a new technology for which the worker lacks the requisite skills, so that he can remain employed only by accepting re-assignment to a lower-paying job.

Let’s follow Lucas a bit further:

Keynes, in chapter 2, deals with the situation facing an individual unemployed worker by evasion and wordplay only. Sentences like “more labor would, as a rule, be forthcoming at the existing money wage if it were demanded” are used again and again as though, from the point of view of a jobless worker, it is unambiguous what is meant by “the existing money wage.” Unless we define an individual’s wage rate as the price someone else is willing to pay him for his labor (in which case Keynes’s assertion is defined to be false), what is it?

Lucas, Id.

I must admit that, reading this passage again perhaps 30 or more years after my first reading, I’m astonished that I could have once read it without astonishment. Lucas gives the game away by accusing Keynes of engaging in evasion and wordplay before embarking himself on sustained evasion and wordplay. The meaning of the “existing money wage” is hardly ambiguous: it is the money wage the unemployed worker was receiving before losing his job and the wage that his fellow workers, who remain employed, continue to receive.

Is Lucas suggesting that the reason that the worker lost his job while his fellow workers did not is that the value of his marginal product fell but the value of his co-workers’ marginal product did not? Perhaps, but that would only add to my astonishment. At the current wage, employers had to reduce the number of workers until the marginal product of the remaining workers was high enough for the employer to continue employing them. That was not necessarily, and certainly not primarily, because the retained workers were more capable than those that were laid off.

The fact is, I think, that Keynes wanted to get labor markets out of the way in chapter 2 so that he could get on to the demand theory which really interested him.

More wordplay. Is it fact or opinion? Well, he says that he thinks it’s a fact. In other words, it’s really an opinion.

This is surely understandable, but what is the excuse for letting his carelessly drawn distinction between voluntary and involuntary unemployment dominate aggregative thinking on labor markets for the forty years following?

Mr. Keynes, really, what is your excuse for being such an awful human being?

[I]nvoluntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to ‘explain’ the theoretical constructs of our predecessor, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Lucas, Id.

Let’s rewrite this paragraph with a few strategic word substitutions:

Heliocentrism is not a fact or phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Copernicus introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: the observed movement of the planets in the heavens. Is it the task of modern theoretical physics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Copernicus died in 1543 shortly before his work on heliocentrism was published. Galileo’s works on heliocentrism were not published until 1610, almost 70 years after Copernicus’s work appeared. So, under Lucas’s forty-year time limit, Galileo had no business trying to explain Copernican heliocentrism, which had still not proven fruitful. Moreover, even after Galileo had published his works, geocentric models were providing predictions of planetary motion as good as, if not better than, those of the heliocentric models, so decisive empirical evidence in favor of heliocentrism was still lacking. Not until Newton published his great work 70 years after Galileo, and 140 years after Copernicus, was heliocentrism finally accepted as fact.

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problem they face. One cannot, even conceptually, arrive at a usable definition of full employment.

Lucas, Id.

Belying his claim to be introducing scientific rigor into macroeconomics, Lucas resorts to an extended scholastic inquiry into whether an unemployed worker can really ever be unemployed involuntarily. Based on his scholastic inquiry into the nature of voluntariness, Lucas declares that Keynes was mistaken because he would not accept the discipline of optimization and equilibrium. But Lucas’s insistence on the discipline of optimization and equilibrium is misplaced unless he can provide an actual mechanism whereby the notional optimization of a single agent can be reconciled with the notional optimization of other individuals.

It was his inability to provide any explanation of the mechanism whereby the notional optimization of individual agents can be reconciled with the notional optimizations of other individual agents that led Lucas to resort to rational expectations to circumvent the need for such a mechanism. He successfully persuaded the economics profession that, in evading the need to explain such a reconciliation mechanism, it would not be shirking its explanatory duty, but would merely be fulfilling its methodological obligation to uphold the neoclassical axioms of rationality and optimization neatly subsumed under the heading of microfoundations.

Rational expectations and microfoundations provided the pretext that could justify or at least excuse the absence of any explanation of how an equilibrium is reached and maintained by assuming that the rational expectations assumption is an adequate substitute for the Walrasian auctioneer, so that each and every agent, using the common knowledge (and only the common knowledge) available to all agents, would reliably anticipate the equilibrium price vector prevailing throughout their infinite lives, thereby guaranteeing continuous equilibrium and consistency of all optimal plans. That feat having been securely accomplished, it was but a small and convenient step to collapse the multitude of individual agents into a single representative agent, so that the virtue of submitting to the discipline of optimization could find its just and fitting reward.

Three Propagation Mechanisms in Lucas and Sargent with a Response from Brad DeLong

UPDATE (4/3/2022): Reupping this post with the response to my query sent by Brad DeLong.

I’m writing this post in hopes of eliciting some guidance from readers about the three propagation mechanisms to which Robert Lucas and Thomas Sargent refer in their famous 1978 article, “After Keynesian Macroeconomics.” The three propagation mechanisms were mentioned to parry criticisms of the rational-expectations principle underlying the New Classical macroeconomics that Lucas and Sargent were then developing as an alternative to Keynesian macroeconomics. I am wondering how subsequent research has dealt with these propagation mechanisms and how they are now treated in current macro-theory. Here is the relevant passage from Lucas and Sargent:

A second line of criticism stems from the correct observation that if agents’ expectations are rational and if their information sets include lagged values of the variable being forecast, then agents’ forecast errors must be a serially uncorrelated random process. That is, on average there must be no detectable relationships between a period’s forecast error and any previous period’s. This feature has led several critics to conclude that equilibrium models cannot account for more than an insignificant part of the highly serially correlated movements we observe in real output, employment, unemployment, and other series. Tobin (1977, p. 461) has put the argument succinctly:

One currently popular explanation of variations in employment is temporary confusion of relative and absolute prices. Employers and workers are fooled into too many jobs by unexpected inflation, but only until they learn it affects other prices, not just the prices of what they sell. The reverse happens temporarily when inflation falls short of expectation. This model can scarcely explain more than transient disequilibrium in labor markets.

So how can the faithful explain the slow cycles of unemployment we actually observe? Only by arguing that the natural rate itself fluctuates, that variations in unemployment rates are substantially changes in voluntary, frictional, or structural unemployment rather than in involuntary joblessness due to generally deficient demand.

The critics typically conclude that the theory only attributes a very minor role to aggregate demand fluctuations and necessarily depends on disturbances to aggregate supply to account for most of the fluctuations in real output over the business cycle. “In other words,” as Modigliani (1977) has said, “what happened to the United States in the 1930’s was a severe attack of contagious laziness.” This criticism is fallacious because it fails to distinguish properly between sources of impulses and propagation mechanisms, a distinction stressed by Ragnar Frisch in a classic 1933 paper that provided many of the technical foundations for Keynesian macroeconometric models. Even though the new classical theory implies that the forecast errors which are the aggregate demand impulses are serially uncorrelated, it is certainly logically possible that propagation mechanisms are at work that convert these impulses into serially correlated movements in real variables like output and employment. Indeed, detailed theoretical work has already shown that two concrete propagation mechanisms do precisely that.

One mechanism stems from the presence of costs to firms of adjusting their stocks of capital and labor rapidly. The presence of these costs is known to make it optimal for firms to spread out over time their response to the relative price signals they receive. That is, such a mechanism causes a firm to convert the serially uncorrelated forecast errors in predicting relative prices into serially correlated movements in factor demands and output.

A second propagation mechanism is already present in the most classical of economic growth models. Households’ optimal accumulation plans for claims on physical capital and other assets convert serially uncorrelated impulses into serially correlated demands for the accumulation of real assets. This happens because agents typically want to divide any unexpected changes in income partly between consuming and accumulating assets. Thus, the demand for assets next period depends on initial stocks and on unexpected changes in the prices or income facing agents. This dependence makes serially uncorrelated surprises lead to serially correlated movements in demands for physical assets. Lucas (1975) showed how this propagation mechanism readily accepts errors in forecasting aggregate demand as an impulse source.

A third likely propagation mechanism has been identified by recent work in search theory. (See, for example, McCall 1965, Mortensen 1970, and Lucas and Prescott 1974.) Search theory tries to explain why workers who for some reason are without jobs find it rational not necessarily to take the first job offer that comes along but instead to remain unemployed for awhile until a better offer materializes. Similarly, the theory explains why a firm may find it optimal to wait until a more suitable job applicant appears so that vacancies persist for some time. Mainly for technical reasons, consistent theoretical models that permit this propagation mechanism to accept errors in forecasting aggregate demand as an impulse have not yet been worked out, but the mechanism seems likely eventually to play an important role in a successful model of the time series behavior of the unemployment rate. In models where agents have imperfect information, either of the first two mechanisms and probably the third can make serially correlated movements in real variables stem from the introduction of a serially uncorrelated sequence of forecasting errors. Thus theoretical and econometric models have been constructed in which in principle the serially uncorrelated process of forecasting errors can account for any proportion between zero and one of the steady state variance of real output or employment. The argument that such models must necessarily attribute most of the variance in real output and employment to variations in aggregate supply is simply wrong logically.

My problem with the Lucas-Sargent argument is that even if the deviations from a long-run equilibrium path are serially correlated, shouldn’t those deviations be diminishing over time after the initial disturbance? Can these propagation mechanisms account for amplification of the initial disturbance before the adjustment toward the equilibrium path begins? I would gratefully welcome any responses.
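The question can be made concrete with a minimal simulation. The partial-adjustment process below is a generic stand-in for the adjustment-cost mechanisms Lucas and Sargent describe, not their actual models; the persistence parameter rho and the sample size are arbitrary assumptions:

```python
import random

# Serially uncorrelated impulses e_t passed through a partial-adjustment
# propagation mechanism: y_t = rho * y_{t-1} + e_t, with 0 < rho < 1.
random.seed(0)
rho = 0.8
T = 20000

e = [random.gauss(0, 1) for _ in range(T)]   # white-noise forecast errors
y = [0.0]
for t in range(1, T):
    y.append(rho * y[-1] + e[t])

def autocorr(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x) - lag
    mx = sum(x) / len(x)
    num = sum((x[t] - mx) * (x[t + lag] - mx) for t in range(n))
    den = sum((v - mx) ** 2 for v in x)
    return num / den

# The impulses are (approximately) uncorrelated; the output series is not:
print(f"autocorr of impulses at lag 1: {autocorr(e, 1):+.3f}")
print(f"autocorr of output at lag 1:   {autocorr(y, 1):+.3f}")

# But the response to a single impulse only damps: 1, rho, rho^2, ...
impulse_response = [rho ** k for k in range(5)]
print("impulse response:", [round(v, 3) for v in impulse_response])
```

White-noise impulses do come out as serially correlated movements in output, which is the Lucas-Sargent point; but the response to any single impulse decays geometrically from the start, so nothing in a mechanism of this kind amplifies the initial disturbance.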

David Glasner has a question about the “rational expectations” business-cycle theories developed in the 1970s:

David Glasner: Three Propagation Mechanisms in Lucas & Sargent: ‘I’m… hop[ing for]… some guidance… about… propagation mechanisms… [in] Robert Lucas and Thomas Sargent[‘s]… “After Keynesian Macroeconomics.”…

The critics typically conclude that the theory only attributes a very minor role to aggregate demand fluctuations and necessarily depends on disturbances to aggregate supply…. [But] even though the new classical theory implies that the forecast errors which are the aggregate demand impulses are serially uncorrelated, it is certainly logically possible that propagation mechanisms are at work that convert these impulses into serially correlated movements in real variables like output and employment… the presence of costs to firms of adjusting their stocks of capital and labor rapidly…. accumulation plans for claims on physical capital and other assets convert serially uncorrelated impulses into serially correlated demands for the accumulation of real assets… workers who for some reason are without jobs find it rational not necessarily to take the first job offer that comes along but instead to remain unemployed for awhile until a better offer materializes…. In principle the serially uncorrelated process of forecasting errors can account for any proportion between zero and one of the [serially correlated] steady state variance of real output or employment. The argument that such models must necessarily attribute most of the variance in real output and employment to variations in aggregate supply is simply wrong logically…

My problem with the Lucas-Sargent argument is that even if the deviations from a long-run equilibrium path are serially correlated, shouldn’t those deviations be diminishing over time after the initial disturbance? Can these propagation mechanisms account for amplification of the initial disturbance before the adjustment toward the equilibrium path begins? I would gratefully welcome any responses…

In some ways this is of only history-of-thought interest. For Lucas and Prescott, at least, had within five years of the writing of “After Keynesian Macroeconomics” decided that the critics were right: that their models of mistaken decisions driven by serially uncorrelated forecast errors could not account for the bulk of the serially correlated business-cycle variance of real output and employment, and that they needed to shift to studying real business cycle theory instead of price-misperceptions theory. The first problem was that time-series methods generated shocks that came at the wrong times to explain recessions. The second problem was that the propagation mechanisms did not amplify but rather damped the shock: at best they produced some kind of partial-adjustment process that extended the impact of a shock on real variables to N periods and diminished its impact in any single period to 1/N. There was no… what is the word?… multiplier in the system.

It was stunning to watch in real time in the early 1980s. As Paul Volcker hit the economy on the head with the monetary-stringency brick, repeatedly, quarter after quarter; as his serially correlated and hence easily anticipated policy moves had large and highly serially correlated effects on output; Robert Lucas and company simply… pretended it was not happening: that monetary policy was not having major effects on output and employment in the first half of the 1980s, and that it was not the case that the monetary policies that were having such profound real impacts had no plausible interpretation as “surprises” leading to “misperceptions”. Meanwhile, over in the other corner, Robert Barro was claiming that he saw no break in the standard pattern of federal deficits from the Reagan administration’s combination of tax cuts plus defense buildup.

Those of us who were graduate students at the time watched this, and drew conclusions about the likelihood that Lucas, Prescott, and company had good enough judgment and close enough contact with reality that their proposed “real business cycle” research program would be a productive one—conclusions that, I think, time has proved fully correct.

Behind all this, of course, was this issue: the “microfoundations” of the Lucas “island economy” model were totally stupid: people are supposed to “misperceive” relative prices because they know the nominal prices at which they sell but do not know the nominal prices at which they buy, hence people confuse a monetary shock-generated rise in the nominal price level with an increase in the real price of what they produce, and hence work harder and longer and produce more? (I forget who it was who said at the time that the model seemed to require a family in which the husband worked and the wife went to the grocery store and the husband never listened to anything the wife said.) These so-called “microfoundations” could only be rationally understood as some kind of metaphor. But what kind of metaphor? And why should it have any special status, and claim on our attention?

Paul Krugman’s judgment on the consequences of this intellectual turn is even harsher than mine:

What made the Dark Ages dark was the fact that so much knowledge had been lost, that so much known to the Greeks and Romans had been forgotten by the barbarian kingdoms that followed. And that’s what seems to have happened to macroeconomics in much of the economics profession. The knowledge that S=I doesn’t imply the Treasury view—the general understanding that macroeconomics is more than supply and demand plus the quantity equation — somehow got lost in much of the profession. I’m tempted to go on and say something about being overrun by barbarians in the grip of an obscurantist faith…

I would merely say that it has left us, over what is now two generations, with a turn to DSGE models—Dynamic Stochastic General Equilibrium—that must satisfy a set of formal rhetorical requirements that really do not help us fit the data, and that it gave many, many people an excuse not to read and hence a license to remain ignorant of James Tobin.

Brad


Involuntary Unemployment, the Mind-Body Problem, and Rubbernecking

The term involuntary unemployment was introduced by Keynes in the General Theory as the name he attached to the phenomenon of high cyclical unemployment during the downward phase of the business cycle. He didn’t necessarily restrict the term to unemployment at the trough of the business cycle, because he at least entertained the possibility of underemployment equilibrium, presumably to indicate that involuntary unemployment could be a long-lasting, even permanent, phenomenon, unless countered by deliberate policy measures.

Keynes provided an explicit definition of involuntary unemployment in the General Theory, a definition that is far from straightforward, but boils down to the following: if unemployment would not fall as a result of a cut in nominal wages, but would fall as a result of a cut in real wages brought about by an increase in the price level, then there is involuntary unemployment. Thus, Keynes explicitly excluded from his definition of involuntary unemployment any unemployment caused by minimum wages or labor-union monopoly power.

Keynes’s definition has always been controversial, because it implies that wage stickiness or rigidity is not the cause of unemployment. There have been at least two approaches to Keynes’s definition of involuntary unemployment that now characterize the views of mainstream macroeconomists.

The first is rationalization. Examples of such rationalization are search and matching theories of unemployment, implicit-contract theories, and efficiency-wage theories. The problem with such rationalizations is that they are rationalizations of why nominal wages are sticky or rigid. But Keynes’s definition of involuntary unemployment was based on the premise that reducing nominal wages does not reduce involuntary unemployment, so the rationalizations of why nominal wages aren’t cut to reduce unemployment seem sort of irrelevant to the concept of involuntary unemployment, or, at least to Keynes’s understanding of the concept.

The second is denial. Perhaps the best example of such denial is provided by Robert Lucas. Here’s his take on involuntary unemployment.

The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesired capital loss. Nevertheless the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

R. E. Lucas, Studies in Business-Cycle Theory, p. 242

Because Lucas believes that it is impossible to determine the extent to which any observed unemployment reflects a voluntary choice by the unemployed worker, or is involuntarily imposed on the worker by a social process beyond the worker’s control, he rejects the distinction as artificial and lacking empirical content, the product of Keynes’s overactive imagination. As such, the concept requires no explanation by economists.

Involuntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope that it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined?

Id., p. 243

Lucas’s point seems to be that the distinction between voluntary and involuntary unemployment is purely semantic and doesn’t correspond to any observable phenomena that are of scientific interest. He may be right, and if he chooses to explain observed fluctuations in unemployment without reference to the distinction between voluntary and involuntary unemployment, he is under no obligation to accommodate the preferences of those economists that believe that involuntary unemployment is a real phenomenon that does require an explanation.

There is a real conflict of paradigms here. Surely Lucas is entitled to reject the Keynesian involuntary unemployment paradigm, and he may be right that trying to explain involuntary unemployment is unlikely to result in a progressive scientific research program. But it is not obvious that he is right.

One might argue that Lucas’s argument against involuntary unemployment resembles the argument of physicalists who deny the reality of mind and of consciousness. According to physicalists, only the brain and brain states exist. The mind and consciousness are just metaphysical concepts lacking any empirical basis. I happen to think that denying the reality of mind and consciousness borders on the absurd, but I am even less of an expert on the mind-body problem than I am on the existence of involuntary unemployment, so I won’t push this particular analogy any further.

Instead, let me try another analogy. Within the legal speed limits, drivers choose different speeds at which they drive while on a turnpike. Does it make sense to distinguish between situations in which they drive less than the speed limit voluntarily and situations in which they drive less than the speed limit involuntarily? Sometimes, there are physical bottlenecks (e.g., lane closures or other obstructions of traffic flows) that prevent cars on the turnpike from going as fast as drivers would have chosen to but for those physical constraints.

Would Lucas deny that the distinction between driving at less than the speed limit voluntarily and driving at less than the speed limit involuntarily is meaningful and empirically relevant?

There are also situations in which drivers involuntarily drive at less than the speed limit, not because of any physical bottleneck on traffic flows, but because of the voluntary choices of some drivers to slow down to rubberneck at something at the side of the turnpike that doesn’t physically obstruct the flow of traffic. Does the interaction between the voluntary choices of different drivers on the turnpike result in some drivers making involuntary choices?

I think the distinction between voluntary and involuntary choices may be relevant and meaningful in this context, but I know almost nothing about traffic-flow theory or queuing theory. I would welcome hearing what readers think about the relevance of the voluntary-involuntary distinction in the context of traffic-flow theory and whether they see any implications for such a distinction in unemployment theory.
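Though I am no traffic theorist, the turnpike story can be made concrete in a few lines of code. What follows is only an illustrative sketch, with made-up speeds and a deliberately crude rule (a single lane with no passing, so each driver's realized speed is capped by the cars ahead), not a real traffic-flow model:

```python
# Illustrative sketch: one driver voluntarily slows to rubberneck on a
# single-lane road, and we check whether trailing drivers are thereby
# forced to drive "involuntarily" below their desired speeds.

def simulate(desired_speeds, rubbernecker_index, rubberneck_speed):
    """Return each driver's realized speed on a single lane with no passing.

    Index 0 is the lead car. A driver's realized speed is the minimum of
    her chosen speed and the realized speed of every driver ahead of her.
    """
    realized = []
    cap = float("inf")  # speed ceiling imposed by the cars ahead
    for i, v in enumerate(desired_speeds):
        choice = rubberneck_speed if i == rubbernecker_index else v
        speed = min(choice, cap)
        realized.append(speed)
        cap = min(cap, speed)
    return realized

# Driver 1 voluntarily slows to 40; drivers 2 and 3, who wanted 64 and 66,
# are forced down to 40 by the voluntary choice of the driver ahead.
speeds = simulate([65, 65, 64, 66], rubbernecker_index=1, rubberneck_speed=40)
```

On these assumed numbers, the drivers behind the rubbernecker end up below their desired speeds through no choice of their own, which is the sense in which the interaction of voluntary choices can produce involuntary outcomes.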

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the idea of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so the consistency of optimizing plans can often only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one – the most egregious, but not the only one – is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who shared an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent – in equilibrium. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
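For readers curious about what the game-theoretic resolution amounts to, here is a minimal sketch. The payoffs are hypothetical (not von Neumann and Morgenstern's own numbers); the code merely applies the standard indifference condition for a 2x2 zero-sum game with no saddle point, which yields the mixed strategies that render the two players' choices mutually consistent:

```python
from fractions import Fraction

# Hypothetical zero-sum payoffs to Holmes (rows: Holmes alights at Dover or
# Canterbury; columns: Moriarty goes to Dover or Canterbury). The numbers
# are illustrative only.
A = [[Fraction(-1), Fraction(1)],
     [Fraction(1), Fraction(-1)]]

def mixed_equilibrium_2x2(A):
    """Equilibrium mixtures for a 2x2 zero-sum game with no saddle point.

    Each player's probability is chosen to make the opponent indifferent
    between his two pure strategies (the standard indifference condition).
    """
    denom = A[0][0] - A[1][0] - A[0][1] + A[1][1]
    # Holmes plays row 0 with probability p solving:
    #   p*A[0][0] + (1-p)*A[1][0] == p*A[0][1] + (1-p)*A[1][1]
    p = (A[1][1] - A[1][0]) / denom
    # Moriarty plays column 0 with probability q solving the symmetric condition.
    q = (A[1][1] - A[0][1]) / denom
    return p, q

p, q = mixed_equilibrium_2x2(A)  # (1/2, 1/2) for these symmetric payoffs
```

The point of the exercise is that the equilibrium is a joint property of both players' strategies: neither probability can be computed without reference to the other player's payoffs, which is exactly the interdependence that a representative-agent construction suppresses.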

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapunov differential equations, describing market price adjustments as functions of market excess demands, would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
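To make the tatonnement idea concrete, here is a minimal sketch of Samuelson's price-adjustment process, discretized as Euler steps of dp/dt = k·z(p), for a two-good Cobb-Douglas exchange economy in which excess demands exhibit gross substitutability, so convergence is known to hold. The endowments, preference shares, and adjustment speed are all illustrative assumptions:

```python
# Sketch of Samuelson-style tatonnement: the "auctioneer" adjusts the price
# of good 1 (good 2 is the numeraire) in proportion to aggregate excess
# demand, with no trading allowed until the market-clearing price is found.

def excess_demand_good1(p1, agents):
    """Aggregate excess demand for good 1 at prices (p1, 1).

    Each agent has Cobb-Douglas preferences, spending a fixed share of
    wealth on good 1, and a fixed endowment (w1, w2).
    """
    z = 0.0
    for share, (w1, w2) in agents:
        wealth = p1 * w1 + w2
        z += share * wealth / p1 - w1  # demand minus endowment
    return z

def tatonnement(agents, p1=5.0, k=0.5, tol=1e-10, max_iter=10_000):
    """Adjust p1 in proportion to excess demand until the market clears."""
    for _ in range(max_iter):
        z = excess_demand_good1(p1, agents)
        if abs(z) < tol:
            break
        p1 += k * z  # Samuelson's adjustment rule, discretized
    return p1

agents = [(0.3, (1.0, 0.0)), (0.6, (0.0, 1.0))]  # (share on good 1, endowment)
p_star = tatonnement(agents)  # converges to 6/7 for these parameters
```

Note what the sketch also illustrates: no trades occur along the adjustment path, which is precisely why, as argued below, the stability of such a process has so little bearing on the adjustment of an actual economy trading in real time.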

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. In extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the problems of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had grappled with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic if not overtly hostile to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel De Vroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. De Vroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” De Vroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, De Vroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, De Vroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline that had to be ruthlessly imposed, on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market-clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. De Vroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as De Vroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, De Vroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, De Vroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non-sequitur. And from that non-sequitur, Lucas jumps to a further non-sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding it conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents interact with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.
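The gap between proving that an equilibrium exists and explaining how it is reached can be illustrated in a few lines of code. Under assumed matching-pennies payoffs (the structure of the Holmes-Moriarty game), a naive adjustment process in which the players alternately play pure best responses cycles forever and never settles on any strategy pair, even though Nash's theorem guarantees that a (mixed) equilibrium exists:

```python
# Illustrative matching-pennies payoffs for the row player; the column
# player, as in any zero-sum game, receives the negative. The only Nash
# equilibrium is mixed: each player randomizes 50/50.
A = [[-1, 1], [1, -1]]

def best_response_row(col):
    """Row player's best pure reply to the column player's current choice."""
    return 0 if A[0][col] >= A[1][col] else 1

def best_response_col(row):
    """Column player's best pure reply: minimize the row player's payoff."""
    return 0 if A[row][0] <= A[row][1] else 1

def best_response_path(row, col, steps):
    """Alternate pure best responses and record the visited strategy pairs."""
    path = [(row, col)]
    for t in range(steps):
        if t % 2 == 0:
            row = best_response_row(col)
        else:
            col = best_response_col(row)
        path.append((row, col))
    return path

path = best_response_path(0, 0, 8)
# Play cycles (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0) -> ... indefinitely.
```

The existence theorem tells us an equilibrium is there; the cycling shows that a plausible adjustment process need never find it, which is exactly the distinction between defining equilibrium conditions and providing a theory of how equilibrium is achieved.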

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher, in his important work The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance on how to improve the current sorry state of macroeconomics.

 

My Paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” Is Now Available Online.

The paper, forthcoming in The Review of Austrian Economics, can be read online.

Here is the abstract:

Hayek was among the first to realize that for intertemporal equilibrium to obtain all agents must have correct expectations of future prices. Before comparing four categories of intertemporal equilibrium, the paper explains Hayek’s distinction between correct expectations and perfect foresight. The four equilibrium concepts considered are: (1) perfect-foresight equilibrium, of which the Arrow-Debreu-McKenzie (ADM) model of equilibrium with complete markets is an alternative version; (2) Radner’s sequential equilibrium with incomplete markets; (3) Hicks’s temporary equilibrium, as extended by Bliss; and (4) the Muth rational-expectations equilibrium, as extended by Lucas into macroeconomics. While Hayek’s understanding closely resembles Radner’s sequential equilibrium, described by Radner as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium seems to have been the natural extension of Hayek’s approach. The now dominant Lucas rational-expectations equilibrium misconceives intertemporal equilibrium, suppressing Hayek’s insights and thereby retreating to a sterile perfect-foresight equilibrium.

And here is my concluding paragraph:

Four score and three years after Hayek explained the subtleties of the notion of intertemporal equilibrium and the elusiveness of any theoretical account of an empirical tendency toward intertemporal equilibrium, modern macroeconomics has built a formidable theoretical apparatus founded on a methodological principle that rejects all the concerns Hayek found so vexing and denies that those difficulties even exist. Many macroeconomists feel proud of what modern macroeconomics has achieved, but there is reason to think that the path trod by Hayek, Hicks and Radner could have led macroeconomics in a more fruitful direction than the one on which it has been led by Lucas and his associates.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.
