Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence as a sophomore: micro (utility and cost theory, Econ 101a; general-equilibrium theory, 101b) and macro theory (102). It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those who are interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement, highlighting its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, which provided an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that Axel would teach the following quarter.
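To make the tâtonnement idea concrete, here is a minimal sketch (my own illustration, not anything Axel assigned) of an auctioneer groping toward the equilibrium price in a two-good Cobb-Douglas exchange economy, with good 2 as numeraire and purely hypothetical parameter values:

```python
def tatonnement(agents, p1=2.0, step=0.5, iters=500):
    """Walrasian auctioneer sketch for a two-good exchange economy.

    Good 2 is the numeraire (its price is 1). Each agent has a
    Cobb-Douglas utility with expenditure share `alpha` on good 1 and
    an endowment (w1, w2). The auctioneer quotes p1, computes the
    aggregate excess demand for good 1, and adjusts p1 in proportion
    to it; no trade takes place until the quote converges.
    """
    for _ in range(iters):
        z1 = sum(alpha * (p1 * w1 + w2) / p1 - w1   # demand minus endowment
                 for alpha, (w1, w2) in agents)
        p1 += step * z1
    return p1

# Hypothetical economy: one agent endowed with good 1, one with good 2.
agents = [(0.6, (1.0, 0.0)), (0.3, (0.0, 1.0))]
p_star = tatonnement(agents)   # converges to 0.75 for these parameters
```

By Walras’s law, once the market for good 1 clears, the market for the numeraire clears as well, so adjusting a single price suffices in this two-good case.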

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory,” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful; macroeconomics was thus often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
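In standard textbook notation (a sketch of the conventional apparatus, not Axel’s own formulation), the model can be written as:

\[
Y = C(Y) + I(r) + G \quad \text{(IS)}, \qquad \frac{M}{P} = L(Y, r) \quad \text{(LM)},
\]

with \(I'(r) < 0\), \(L_Y > 0\), and \(L_r < 0\). The Keynesian-cross extreme corresponds to a perfectly interest-elastic demand for money (\(L_r \to -\infty\), a horizontal LM curve), rendering monetary policy ineffective; the Monetarist (classical) extreme corresponds to a perfectly interest-inelastic demand for money (\(L_r \to 0\), a vertical LM curve), rendering fiscal policy ineffective.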

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own preview, “Keynes and the Keynesians: A Suggested Interpretation,” of his forthcoming book, and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in the fall 1968 quarter, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, stable and unstable dynamic systems and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and to business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity into macroeconomic analysis were never fulfilled. A glance at the index of Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different path, the one blazed by Robert Lucas and his followers, who insisted on a nearly continuous state of rational-expectations equilibrium and implicitly denied that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing wage, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing market wage by raising the price level to reduce the real wage down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment would occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices inevitably leads to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Attention to the implications of the lack of an actual coordinating mechanism simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption) by neoclassical economic theory was, in Axel’s view, the great contribution of Keynes. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers, b) assuming that all agents therefore form identical rational expectations of all future prices using the same common knowledge, so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant assumptions relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real-productivity shocks that dispense with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis but with little claim to any actual analytical insight.

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. Earl’s explanation of the method was based on the assumption that markets for current delivery clear, but that those market-clearing prices are different from the prices agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, active and making important contributions to our understanding. And we should be grateful.

Hayek and the Lucas Critique

In March I wrote a blog post, “Robert Lucas and the Pretense of Science,” which was a draft proposal for a paper for a conference on Coordination Issues in Historical Perspectives to be held in September. My proposal having been accepted, I’m going to post sections of the paper on the blog in hopes of getting some feedback as I write it. What follows is the first of several anticipated draft sections.

Just 31 years old, F. A. Hayek rose rapidly to stardom after giving four lectures at the London School of Economics at the invitation of his almost exact contemporary, and soon to be best friend, Lionel Robbins. Hayek had already published several important works, of which Hayek ([1928] 1984), laying out the basic conceptualization of an intertemporal equilibrium almost simultaneously with the similar conceptualizations of two young Swedish economists, Gunnar Myrdal (1927) and Erik Lindahl ([1929] 1939), was the most important.

Hayek’s (1931a) LSE lectures aimed to provide a policy-relevant version of a specific theoretical model of the business cycle that drew upon, but was just a particular instantiation of, the general conceptualization developed in his 1928 contribution. Delivered less than two years after the start of the Great Depression, Hayek’s lectures gave a historical overview of the monetary theory of business cycles, an account of how monetary disturbances cause real effects, and a skeptical discussion of how monetary policy might, or more likely might not, counteract or mitigate the downturn then underway. It was Hayek’s skepticism about countercyclical policy that helped make those lectures so compelling but also elicited such a hostile reaction during the unfolding crisis.

The extraordinary success of his lectures established Hayek’s reputation as a preeminent monetary theorist alongside established figures like Irving Fisher, A. C. Pigou, D. H. Robertson, R. G. Hawtrey, and of course J. M. Keynes. Hayek’s (1931b) critical review of Keynes’s just-published Treatise on Money (1930), which appeared soon after his LSE lectures and provoked a heated exchange with Keynes himself, showed him to be a skilled debater and a powerful polemicist.

Hayek’s meteoric rise was, however, followed by a rapid fall from the briefly held pinnacle of his early career. Aside from the imperfections and weaknesses of his own theoretical framework (Glasner and Zimmerman 2021), his diagnosis of the causes of the Great Depression (Glasner and Batchelder [1994] 2021a, 2021b) and his policy advice (Glasner 2021) were theoretically misguided and inappropriate to the deflationary conditions underlying the Great Depression.

Nevertheless, Hayek’s conceptualization of intertemporal equilibrium provided insight into the role not only of prices, but also of price expectations, in accounting for cyclical fluctuations. In Hayek’s 1931 version of his cycle theory, the upturn results from bank-financed investment spending enabled by monetary expansion that fuels an economic boom characterized by increased total spending, output and employment. However, owing to resource constraints, misalignments between demand and supply, and drains of bank reserves, the optimistic expectations engendered by the boom are doomed to eventual disappointment, whereupon a downturn begins.

I need not engage here with the substance of Hayek’s cycle theory, which I have criticized elsewhere (see references above). But I would like to consider his 1934 explanation, responding to Hansen and Tout (1933), of why a permanent monetary expansion would be impossible. Hansen and Tout had disputed Hayek’s contention that monetary expansion would inevitably lead to a recession, arguing that an unconstrained monetary authority, not being forced by a reserve drain to halt a monetary expansion, could allow a boom to continue indefinitely, permanently maintaining an excess of investment over saving.

Hayek (1934) responded as follows:

[A] constant rate of forced saving (i.e., investment in excess of voluntary saving) [requires] a rate of credit expansion which will enable the producers of intermediate products, during each successive unit of time, to compete successfully with the producers of consumers’ goods for constant additional quantities of the original factors of production. But as the competing demand from the producers of consumers’ goods rises (in terms of money) in consequence of, and in proportion to, the preceding increase of expenditure on the factors of production (income), an increase of credit which is to enable the producers of intermediate products to attract additional original factors, will have to be, not only absolutely but even relatively, greater than the last increase which is now reflected in the increased demand for consumers’ goods. Even in order to attract only as great a proportion of the original factors, i.e., in order merely to maintain the already existing capital, every new increase would have to be proportional to the last increase, i.e., credit would have to expand progressively at a constant rate. But in order to bring about constant additions to capital, it would have to do more: it would have to increase at a constantly increasing rate. The rate at which this rate of increase must increase would be dependent upon the time lag between the first expenditure of the additional money on the factors of production and the re-expenditure of the income so created on consumers’ goods. . . .

But I think it can be shown . . . that . . . such a policy would . . . inevitably lead to a rapid and progressive rise in prices which, in addition to its other undesirable effects, would set up movements which would soon counteract, and finally more than offset, the “forced saving.” That it is impossible, either for a simple progressive increase of credit which only helps to maintain, and does not add to, the already existing “forced saving,” or for an increase in credit at an increasing rate, to continue for a considerable time without causing a rise in prices, results from the fact that in neither case have we reason to assume that the increase in the supply of consumers’ goods will keep pace with the increase in the flow of money coming on to the market for consumers’ goods. Insofar as, in the second case, the credit expansion leads to an ultimate increase in the output of consumers’ goods, this increase will lag considerably and increasingly (as the period of production increases) behind the increase in the demand for them. But whether the prices of consumers’ goods will rise faster or slower, all other prices, and particularly the prices of the original factors of production, will rise even faster. It is only a question of time when this general and progressive rise of prices becomes very rapid. My argument is not that such a development is inevitable once a policy of credit expansion is embarked upon, but that it has to be carried to that point if a certain result—a constant rate of forced saving, or maintenance without the help of voluntary saving of capital accumulated by forced saving—is to be achieved.
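Hayek’s arithmetic can be illustrated with a stylized sketch (my own hypothetical rendering with invented numbers, not Hayek’s). Suppose all factor income is respent on consumer goods with a one-period lag, so consumer-goods producers can bid for factors only with last period’s sales revenue, while producers of intermediate products bid with newly created credit. Holding the intermediate producers’ share of total factor spending constant then requires credit injections that grow geometrically:

```python
def credit_path(s, F0=100.0, periods=6):
    """Credit injections needed to hold the intermediate producers'
    share of factor spending constant at s, when all factor income is
    respent on consumer goods with a one-period lag.

    F = total factor outlay per period. Consumer-goods producers can
    spend only last period's revenue F, so the credit injection must
    supply the remainder: C = s/(1-s) * F.
    """
    F, path = F0, []
    for _ in range(periods):
        C = s / (1 - s) * F   # injection that keeps the share at s
        path.append(C)
        F += C                # total factor outlay grows by the injection
    return path

injections = credit_path(0.2)  # a 20% share held constant
```

With s = 0.2, each injection is 25% larger than the last (a factor of 1/(1 − s) per period), so nominal spending grows without bound; since real output cannot keep pace, the result is the “general and progressive rise of prices” that Hayek describes.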

Friedman’s (1968) argument why monetary expansion could not permanently reduce unemployment below its “natural rate” closely mirrors (though he almost certainly never read) Hayek’s argument that monetary expansion could not permanently maintain a rate of investment spending above the rate of voluntary saving. Generalizing Friedman’s logic, Lucas (1976) transformed it into a critique of using econometric estimates of relationships like the Phillips Curve, the specific target of Friedman’s argument, as a basis for predicting the effects of policy changes, such estimates being conditional on implicit expectational assumptions which aren’t invariant to the policy changes derived from those estimates.

Stated differently, such econometric estimates are reduced forms that, without identifying restrictions, do not allow the estimated regression coefficients to be used to predict the effects of a policy change.

Only by specifying, and estimating, the deep structural relationships governing the response to a policy change could the effect of a potential policy change be predicted with some confidence that the prediction would not prove erroneous because of changes in the econometrically estimated relationships once agents altered their behavior in response to the policy change.
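The logic can be put in a stylized algebraic form (a standard textbook illustration, not Lucas’s own model). Suppose output deviations depend only on unanticipated inflation and policy follows a simple rule:

\[
y_t = \theta\,(\pi_t - \mathbb{E}_{t-1}\pi_t), \qquad \pi_t = \mu + \varepsilon_t
\quad\Longrightarrow\quad y_t = -\theta\mu + \theta\,\pi_t .
\]

A regression of \(y_t\) on \(\pi_t\) estimated while the rule \(\mu\) is in force recovers an apparently stable relationship, but its intercept \(-\theta\mu\) is a function of the policy parameter. If policymakers exploit the estimated relation by switching to a new rule \(\mu'\), agents’ expectations adjust, the intercept shifts, and the estimated relation breaks down.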

In his 1974 Nobel Lecture, Hayek offered a similar explanation of why an observed correlation between aggregate demand and employment provides no basis for predicting the effect of policies aimed at increasing aggregate demand and reducing unemployment if the likely changes in structural relationships caused by those policies are not taken into account.

[T]he very measures which the dominant “macro-economic” theory has recommended as a remedy for unemployment, namely the increase of aggregate demand, have become a cause of a very extensive misallocation of resources which is likely to make later large-scale unemployment inevitable. The continuous injection . . . money at points of the economic system where it creates a temporary demand which must cease when the increase of the quantity of money stops or slows down, together with the expectation of a continuing rise of prices, draws labour . . . into employments which can last only so long as the increase of the quantity of money continues at the same rate – or perhaps even only so long as it continues to accelerate at a given rate. What this policy has produced is not so much a level of employment that could not have been brought about in other ways, as a distribution of employment which cannot be indefinitely maintained . . . The fact is that by a mistaken theoretical view we have been led into a precarious position in which we cannot prevent substantial unemployment from re-appearing; not because . . . this unemployment is deliberately brought about as a means to combat inflation, but because it is now bound to occur as a deeply regrettable but inescapable consequence of the mistaken policies of the past as soon as inflation ceases to accelerate.

Hayek’s point that an observed correlation between the rate of inflation (a proxy for aggregate demand) and unemployment cannot be relied on in making economic policy was articulated succinctly and abstractly by Lucas as follows:

In short, one can imagine situations in which empirical Phillips curves exhibit long lags and situations in which there are no lagged effects. In either case, the “long-run” output-inflation relationship as calculated or simulated in the conventional way has no bearing on the actual consequences of pursuing a policy of inflation.

[T]he ability . . . to forecast consequences of a change in policy rests crucially on the assumption that the parameters describing the new policy . . . are known by agents. Over periods for which this assumption is not approximately valid . . . empirical Phillips curves will appear subject to “parameter drift,” describable over the sample period, but unpredictable for all but the very near future.

The lesson inferred by both Hayek and Lucas was that Keynesian macroeconomic models of aggregate demand, inflation and employment can’t reliably guide economic policy and should be discarded in favor of models more securely grounded in the microeconomic theories of supply and demand that emerged from the Marginal Revolution of the 1870s and eventually became the neoclassical economic theory describing an efficient, decentralized and self-regulating economic system. This was the microeconomic foundation on which Hayek and Lucas believed macroeconomic theory ought to rest, in place of the Keynesian system they were criticizing. But that superficial similarity between their positions obscures the profound methodological and substantive differences between them.

Those differences will be considered in future posts.

References

Friedman, M. 1968. “The Role of Monetary Policy.” American Economic Review 58(1):1-17.

Glasner, D. 2021. “Hayek, Deflation, Gold and Nihilism.” Ch. 16 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. [1994] 2021. “Debt, Deflation, the Gold Standard and the Great Depression.” Ch. 13 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. 2021. “Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel?” Ch. 14 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Zimmerman, P. 2021. “The Sraffa-Hayek Debate on the Natural Rate of Interest.” Ch. 15 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Hansen, A. and Tout, H. 1933. “Annual Survey of Business Cycle Theory: Investment and Saving in Business Cycle Theory,” Econometrica 1(2): 119-47.

Hayek, F. A. [1928] 1984. “Intertemporal Price Equilibrium and Movements in the Value of Money.” In R. McCloughry (Ed.), Money, Capital and Fluctuations: Early Essays (pp. 171–215). Routledge.

Hayek, F. A. 1931a. Prices and Production. London: Macmillan.

Hayek, F. A. 1931b. “Reflections on the Pure Theory of Money of Mr. Keynes.” Economica 33:270-95.

Hayek, F. A. 1934. “Capital and Industrial Fluctuations.” Econometrica 2(2): 152-67.

Keynes, J. M. 1930. A Treatise on Money. 2 vols. London: Macmillan.

Lindahl, E. [1929] 1939. “The Place of Capital in the Theory of Price.” In E. Lindahl, Studies in the Theory of Money and Capital. London: George Allen & Unwin.

Lucas, R. E. [1976] 1985. “Econometric Policy Evaluation: A Critique.” In R. E. Lucas, Studies in Business-Cycle Theory. Cambridge: MIT Press.

Myrdal, G. 1927. Prisbildningsproblemet och föränderligheten (Price Formation and the Change Factor). Almqvist & Wiksell.

Summer 2008 Redux?

Nearly 14 years ago, in the summer of 2008, as a recession that started late in 2007 was rapidly deepening and unemployment rapidly rising, the Fed, mainly concerned about rising headline inflation fueled by record-breaking oil prices, kept its Fed Funds target at the 2% level set in May (slightly reduced from the 2.25% target set in March), lest inflation expectations become unanchored.

Let’s look at what happened after the Fed Funds target was reduced to 2.25% in March 2008. The price of crude oil (West Texas Intermediate) rose by nearly 50% between March and July, causing CPI inflation (year over year) between March and August to increase from 4% to 5.5%, even as unemployment rose from 5.1% in March to 5.8% in July. The PCE index, closely watched by the Fed as more indicative of underlying inflation than the CPI, showed inflation rising even faster than did the CPI.

Not only did the Fed refuse to counter rising unemployment and declining income and output by reducing its Fed Funds target, it made clear that reducing inflation was a more urgent goal than countering economic contraction and rising unemployment. An unchanged Fed Funds target while income and employment are falling, in effect, tightens monetary policy, a point underscored by the Fed as it emphasized its intent, despite the uptick in inflation caused by rising oil prices, to keep inflation expectations anchored.

The passive tightening of monetary policy associated with an unchanged Federal Funds target while income and employment were falling and the price of oil was rising led to a nearly 15% decline in the price of oil between mid-July and the end of August, and to a concurrent 10% increase in the dollar exchange rate against the euro, a deflationary trend also reflected in an increase in the unemployment rate to 6.1% in August.

Evidently pleased with the deflationary impact of its passive tightening of monetary policy, the Fed viewed the falling price of oil and the appreciation of the dollar as an implicit endorsement by the markets, notwithstanding a deepening recession in a financially fragile economy, of its hard line on inflation. With major financial institutions weakened by the aftereffects of bad and sometimes fraudulent investments made in the expectation of rising home prices that then began falling, many debtors (both households and businesses) had neither sufficient cash flow nor sufficient credit to meet their debt obligations. When the Lehman Brothers investment bank, heavily invested in subprime mortgages, was on the verge of collapse in the second week of September, the Fed, perhaps emboldened by the perceived market approval of its anti-inflation hard line, refused to provide, or arrange for, emergency financing to enable Lehman to meet obligations coming due. That refusal triggered a financial panic, stoked by fears that other institutions were also at risk, which almost immediately froze credit facilities in financial centers in the US and around the world. The rest is history.

Why bring up this history now? I do so because I see troubling parallels between what happened in 2008 and what is happening now, parallels that make me concerned that too narrow a focus on preventing inflation expectations from becoming unanchored could lead to unpleasant and unnecessary consequences.

First, in 2008, the WTI price of oil rose by nearly 50% between March and July, while in 2021-22 the WTI oil price rose by over 75% between December 2021 and April 2022. Both episodes of rising oil prices clearly depressed real GDP growth. Second, in both 2008 and 2021-22, the rising oil price caused actual, and, very likely, expected rates of inflation to rise. Third, in 2008, the dollar appreciated from $1.59/euro on July 15 to $1.39/euro on September 12, while, in 2022, the dollar has appreciated from $1.14/euro on February 11 to $1.05/euro on April 29.

In 2008, an inflationary burst, fed in part by rapidly rising oil prices, led to a passive tightening of monetary policy, manifested in dollar appreciation in forex markets, plunging an economy, burdened with a fragile financial system carrying overvalued assets, and already in recession, into a financial crisis. This time, even steeper increases in oil prices, having fueled an initial burst of inflation during the recovery from a pandemic/supply-side recession, were later reinforced by further negative supply shocks stemming from Russia’s invasion of Ukraine. The complex effects of both negative supply-shocks and excess aggregate demand have caused monetary policy to shift from ease to restraint, once again manifested in dollar appreciation in foreign-exchange markets.

In September 2008, the Fed, focused narrowly on inflation, was oblivious to the looming financial crisis as deflationary forces, amplified by the passive monetary tightening of the preceding two months, were gathering. This time, although monetary tightening to rein in excess aggregate demand is undoubtedly appropriate, signs of ebbing inflationary pressure are multiplying, and many forecasters are predicting that inflation will subside to 4% or less by year’s end. Modest further tightening to reduce aggregate demand to a level consistent with a 2% inflation rate might be appropriate, but the watchword for policymakers now should be caution.

While there is little reason to think that the US economy and financial system are now in as precarious a state as they were in the summer of 2008, a decision to raise the target Fed Funds rate by more than 50 basis points as a demonstration of the Fed’s resolve to hold the line on inflation would certainly be ill-advised, and an increase of more than 25 basis points would now be imprudent.

The preliminary report on first-quarter 2022 GDP presented a mixed picture of the economy. A small drop in real GDP seems like an artefact of technical factors, and an upward revision seems likely, with no evidence yet of declining employment or slack in the labor market. While nominal GDP growth declined substantially in the first quarter from the double-digit growth rate in 2021, it remains above the rate consistent with the 2% inflation rate that is still the Fed’s policy target. However, given the continuing risks of further negative supply-side shocks while the war in Ukraine continues, the Fed should not allow the nominal growth rate of GDP to fall below the 5% rate that ought to remain the short-term target under current conditions.

If the Fed is committed to a policy target of 2% average inflation over a suitably long time horizon, the rate of nominal GDP growth need not fall below 5% before normal peacetime economic conditions have been restored. Until a return to normalcy, avoiding the risk of reducing nominal GDP growth below a 5% rate should have priority over quickly reducing inflation to the targeted long-run average rate. To do otherwise would increase the risk that inadvertent policy mistakes in an uncertain economic environment might cause sufficient financial distress to tip the economy into recession and even another financial crisis. Better safe than sorry.

Why I’m not Apologizing for Calling Recent Inflation Transitory

I’ve written three recent blogposts explaining why the inflation that began accelerating in the second half of 2021 was likely to be transitory (High Inflation Anxiety, Sic Transit Inflatio del Mundi, and Wherein I Try to Calm Professor Blanchard’s Nerves). I didn’t deny that inflation was accelerating and likely required a policy adjustment, but I also didn’t accept that the inflation threat was (or is) as urgent as some, notably Larry Summers, were suggesting.

In my two posts in late 2021, I argued that Summers’s concerns were overblown, because the burst of inflation in the second half of 2021 was caused mainly by increased consumer spending as consumers began drawing down cash and liquid assets accumulated when spending outlets had been unavailable, and was exacerbated by supply bottlenecks that kept output from accommodating increased consumer demand. Beyond that, despite rising short-term inflation expectations, I discounted fears that the inflationary burst of late 2021 would unanchor expectations, given the absence of any sign of rising inflation expectations in longer-term (5 years or more) bond prices.

Aside from criticizing excessive concern with what I viewed as a transitory burst of inflation not entirely caused by expansive monetary policy, I cautioned against reacting to inflation caused by negative supply shocks. In contrast to Summers’s warnings about the lessons of the 1970s, when high inflation became entrenched before finally being broken by Volcker’s anti-inflation policy at the cost of the worst recession since the Great Depression, I explained that much of 1970s inflation was caused by supply-side oil shocks, which triggered an unnecessarily severe monetary tightening in 1974-75 and a deep recession that only modestly reduced inflation. Most of the decline in inflation following the oil shock occurred during the 1976 expansion, when inflation fell to 5%. But, rather than allow a strong recovery to proceed on its own, the incoming Carter Administration and a compliant Fed, attempting to accelerate the restoration of full employment, increased monetary expansion. (It’s noteworthy that much of the high unemployment at the time reflected the entry of baby-boomers and women into the labor force, one of the few occasions in which an increase in the natural rate of unemployment can be easily identified.)

The 1977-79 monetary expansion caused inflation to accelerate to the high single digits even before the oil shocks of 1979-80 led to double-digit inflation, setting the stage for Volcker’s brutal disinflationary campaign of 1981-82. But the mistake of tightening monetary policy to suppress inflation resulting from negative supply shocks (usually associated with rising oil prices) went unacknowledged; the only lesson learned, albeit mistakenly, was that high inflation can be reduced only by a monetary tightening severe enough to cause a deep recession.

Because of that mistaken lesson, the Fed, focused solely on the danger of unanchored inflation expectations, resisted pleas in the summer of 2008 to ease monetary policy, even as the economy contracted and unemployment rose rapidly, until October, a month after the start of the financial crisis. That disastrous misjudgment made me doubt the arguments of Larry Summers et al. that tight money was required to counter inflation and prevent the unanchoring of inflation expectations, recent inflation being largely attributable, like the inflation blip of 2008, to negative supply shocks, with little evidence that inflation expectations had become, or were likely to become, unanchored.

My first two responses to inflation hawks occurred before release of the fourth-quarter 2021 GDP report. In the first three quarters, nominal GDP grew by 10.9%, 13.4% and 8.4%. My hope was that the Q4 rate of increase in nominal GDP would show a further decline from the Q3 rate, or at least show no increase. The rising trend of inflation in the final months of 2021, with no evidence of a slowdown in economic activity, made it likely that nominal GDP growth had accelerated in Q4. In the event, the acceleration of nominal GDP growth to 14.5% in Q4 showed that a tightening of monetary policy had become necessary.

Although a tightening of policy was clearly required to reduce the rate of nominal GDP growth, there was still reason for optimism that the negative supply-side shocks that had amplified inflationary pressure would recede, thereby allowing nominal GDP growth to slow down with no contraction in output and employment. Unfortunately, the economic environment deteriorated drastically in the latter part of 2021 as Russia began the buildup to its invasion of Ukraine, and deteriorated even more once the invasion started.

The price of Brent crude, just over $50/barrel in January 2021, rose to over $80/barrel in November of 2021. Tensions between Russia and Ukraine rose steadily during 2021, so it is not easy to determine the extent to which those increasing tensions were causing oil prices to rise and to what extent they rose because of increasing economic activity and inflationary pressure on oil prices. Brent crude fell to $70 in December before rising to $100/barrel in February on the eve of the invasion, briefly reaching $130/barrel shortly thereafter, before falling back to $100/barrel. Aside from the effect on energy prices, generalized uncertainty and potential effects on wheat prices and the federal budget from a drawn-out conflict in Ukraine have caused inflation expectations to increase.

Under these circumstances, it makes little sense to tighten policy suddenly. The appropriate policy strategy is to lean toward restraint and announce that the aim of policy is to reduce the rate of GDP growth gradually until a sustainable 4-5% rate of nominal GDP growth consistent with an inflation rate of about 2-3% a year is reached. The overnight rate of interest being the primary instrument whereby the Fed can either increase or decrease the rate of nominal GDP growth, it is unnecessary, and probably unwise, for the Fed to announce in advance a path of interest-rate increases. Instead, the Fed should communicate its target range for nominal GDP growth and condition the size and frequency of future rate increases on the deviations of the economy from that targeted growth path of nominal GDP.

Previous monetary policy mistakes that caused either recessions or excessive inflation have for more than half a century resulted from using interest rates or some other policy instrument to control inflation or unemployment rather than to moderate deviations from a stable growth rate in nominal GDP. Attempts to reduce inflation by maintaining or increasing already high interest rates until inflation actually fell needlessly and perversely prolonged and deepened recessions. Monetary conditions ought to be eased as soon as nominal GDP growth falls below the target range. Inflation automatically tends to fall in the early stages of recovery from a recession, and nothing is gained, and much harm is done, by maintaining a tight-money policy after nominal GDP growth has fallen below the target range. That’s the great, and still unlearned, lesson of monetary policy.
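As a purely illustrative sketch (not a rule the Fed has ever adopted), the kind of feedback policy described above, with a hypothetical 4-5% target range for nominal GDP growth, might look like the following; the response coefficient and the 50-basis-point cap are likewise assumptions:

```python
# Hypothetical sketch of a feedback rule: the policy rate responds to
# deviations of nominal GDP growth from a target range, easing as soon as
# growth falls below the range. All numbers are illustrative assumptions.

TARGET_LOW, TARGET_HIGH = 4.0, 5.0  # assumed NGDP growth target range (% per year)

def rate_adjustment(ngdp_growth: float) -> float:
    """Change in the policy rate, in percentage points, given NGDP growth."""
    if ngdp_growth > TARGET_HIGH:
        # tighten in proportion to the overshoot, capped at 50 basis points
        return min(0.25 * (ngdp_growth - TARGET_HIGH), 0.50)
    if ngdp_growth < TARGET_LOW:
        # ease as soon as growth falls below the range, with the same cap
        return max(-0.25 * (TARGET_LOW - ngdp_growth), -0.50)
    return 0.0  # stand pat inside the target range

print(rate_adjustment(14.5))  # capped tightening after Q4-2021-style growth
print(rate_adjustment(4.5))   # no change inside the range
print(rate_adjustment(3.0))   # easing when growth undershoots
```

The point of conditioning on the deviation itself, rather than pre-announcing a path of rate increases, is that the rule eases automatically and immediately once growth undershoots, instead of holding rates high until inflation has already fallen.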

On the Labor Supply Function

The bread and butter of economics is demand and supply. The basic idea of a demand function (or a demand curve) is to describe a relationship between the price at which a given product, commodity or service can be bought and the quantity that will be bought by some individual. The standard assumption is that the quantity demanded increases as the price falls, so that the demand curve is downward-sloping, but not much more can be said about the shape of a demand curve unless special assumptions are made about the individual’s preferences.

Demand curves aren’t natural phenomena with concrete existence; they are hypothetical or notional constructs pertaining to individual preferences. To pass from individual demands to a market demand for a product, commodity or service requires another conceptual process summing the quantities demanded by each individual at any given price. The conceptual process is never actually performed, so the downward-sloping market demand curve is just presumed, not observed as a fact of nature.

The summation process required to pass from individual demands to a market demand implies that the quantity demanded at any price is the quantity demanded when each individual pays exactly the same price that every other demander pays. At a price of $10/widget, the widget demand curve tells us how many widgets would be purchased if every purchaser in the market can buy as much as desired at $10/widget. If some customers can buy at $10/widget while others have to pay $20/widget or some can’t buy any widgets at any price, then the quantity of widgets actually bought will not equal the quantity on the hypothetical widget demand curve corresponding to $10/widget.
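The conceptual summation can be sketched in a few lines of code; the three linear individual demand functions and the prices below are hypothetical illustrations, not data:

```python
# A minimal sketch of the conceptual summation described above: market
# quantity demanded at a uniform price is the sum of each individual's
# quantity demanded at that same price.

def market_demand(individual_demands, price):
    """Sum quantity demanded across all individuals at one uniform price."""
    return sum(max(demand(price), 0.0) for demand in individual_demands)

# three hypothetical widget buyers with linear individual demands
buyers = [
    lambda p: 10.0 - 0.5 * p,
    lambda p: 6.0 - 0.2 * p,
    lambda p: 4.0 - 0.4 * p,  # demands nothing at prices of $10 and above
]

print(market_demand(buyers, 10.0))  # quantity if everyone can buy at $10/widget
```

Note that the function takes a single `price` argument for everyone: if some buyers faced $10/widget and others $20/widget, no point on this summed curve would describe the quantity actually bought, which is exactly the point made above.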

Similar reasoning underlies the supply function or supply curve for any product, commodity or service. The market supply curve is built up from the preferences and costs of individuals and firms and represents the amount of a product, commodity or service that they would be willing to offer for sale at different prices. The market supply curve is the result of a conceptual summation process that adds up the amounts that would hypothetically be offered for sale by every agent at different prices.

The point of this pedantry is to emphasize that the demand and supply curves we use are drawn on the assumption that a single uniform price prevails in every market, that all demanders and suppliers can trade without limit at those prices, and that their trading plans are fully executed. This is the equilibrium paradigm underlying the supply-demand analysis of econ 101.

Economists quite unself-consciously deploy supply-demand concepts to analyze labor markets in a variety of settings. Sometimes, if the labor market under analysis is limited to a particular trade or a particular skill or a particular geographic area, the supply-demand framework is reasonable and appropriate. But when applied to the aggregate labor market of the whole economy, the supply-demand framework is inappropriate, because the ceteris-paribus proviso (all prices other than the price of the product, commodity or service in question are held constant) attached to every supply-demand model is obviously violated.

Thoughtlessly applying a simple supply-demand model to analyze the labor market of an entire economy leads to the conclusion that widespread unemployment (a situation in which workers remain unemployed despite being willing to accept offers at wages that comparably skilled workers are actually receiving) implies that wages are above the market-clearing level consistent with full employment.

The attached diagram shows the simplest version of this analysis. The market wage (W1) is higher than the equilibrium wage (We) at which all workers willing to accept that wage could be employed. The difference between the number of workers seeking employment at the market wage (LS) and the number of workers that employers seek to hire (LD) measures the amount of unemployment. According to this analysis, unemployment would be eliminated if the market wage fell from W1 to We.
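The diagram’s logic can be illustrated numerically; the linear supply and demand schedules below are hypothetical, chosen only to reproduce the configuration just described (a market wage W1 above the market-clearing wage We), not estimates of any actual labor market:

```python
# A numerical sketch of the textbook diagram: at a wage above the
# market-clearing level, measured unemployment is the gap between labor
# supplied (LS) and labor demanded (LD). All numbers are hypothetical.

def labor_supplied(w):
    return 100.0 + 20.0 * w   # upward-sloping supply (workers seeking jobs)

def labor_demanded(w):
    return 400.0 - 30.0 * w   # downward-sloping demand (jobs offered)

w_e = 6.0  # equilibrium wage We: supply equals demand at 220 workers
w_1 = 8.0  # assumed market wage W1, above the equilibrium wage

# unemployment in the diagram is the gap LS - LD at the market wage
unemployment = labor_supplied(w_1) - labor_demanded(w_1)
print(unemployment)  # 260 supplied minus 160 demanded = 100 unemployed
```

The criticism that follows is precisely that this tidy calculation holds everything else constant, which is an untenable assumption for the labor market of a whole economy.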

Applying supply-demand analysis to aggregate unemployment fails on two levels. First, workers clearly are unable to execute their plans to offer their labor services at the wage at which other workers are employed, so individual workers are off their supply curves. Second, it is impossible to assume, as supply-demand analysis requires, that all other prices and incomes remain constant, so that the demand and supply curves do not move as wages and employment change. When multiple variables are mutually interdependent and simultaneously determined, the analysis of just two variables (wages and employment) cannot be isolated from the rest of the system. Focusing on the wage as the variable that needs to change to restore full employment is an example of tunnel vision.

Keynes rejected the idea that economy-wide unemployment could be eliminated by cutting wages. Although Keynes’s argument against wage cuts as a cure for unemployment was flawed, he did have at least an intuitive grasp of the basic weakness in the argument for wage cuts: that high aggregate unemployment is not usefully analyzed as a symptom of excessive wages. To explain why wage cuts aren’t the cure for high unemployment, Keynes introduced a distinction between voluntary and involuntary unemployment.

Forty years later, Robert Lucas began his effort — not the first such effort, but by far the most successful — to discredit the concept of involuntary unemployment. Here’s an early example:

Keynes [hypothesized] that measured unemployment can be decomposed into two distinct components: ‘voluntary’ (or frictional) and ‘involuntary’, with full employment then identified as the level prevailing when involuntary unemployment equals zero. It seems appropriate, then, to begin by reviewing Keynes’ reasons for introducing this distinction in the first place. . . .

Accepting the necessity of a distinction between explanations for normal and cyclical unemployment does not, however, compel one to identify the first as voluntary and the second as involuntary, as Keynes goes on to do. This terminology suggests that the key to the distinction lies in some difference in the way two different types of unemployment are perceived by workers. Now in the first place, the distinction we are after concerns sources of unemployment, not differentiated types. . . .[O]ne may classify motives for holding money without imagining that anyone can subdivide his own cash holdings into “transactions balances,” “precautionary balances”, and so forth. The recognition that one needs to distinguish among sources of unemployment does not in any way imply that one needs to distinguish among types.

Nor is there any evident reason why one would want to draw this distinction. Certainly the more one thinks about the decision problem facing individual workers and firms the less sense this distinction makes. The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesirable capital loss. Nevertheless, the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

Lucas, Studies in Business Cycle Theory, pp. 241-43

Consider this revision of Lucas’s argument:

The expressway driver who is slowed down in a traffic jam does not volunteer to be in this situation; he has suffered a waste of his time. Nevertheless, the driver can get off the expressway at the next exit to find an alternate route. Thus, there is an involuntary element in every traffic jam, in the sense that no one chooses to waste time; there is also a voluntary element in all traffic jams, in the sense that however stuck one is in traffic, one can always take the next exit on the expressway.

What is lost on Lucas is that, for an individual worker, taking a wage cut to avoid being laid off accomplishes nothing, because the willingness of a single worker to accept a wage cut would not induce the employer to increase output and employment. Unless all workers agreed to take wage cuts, a wage cut accepted by one employee would not cause the employer to reconsider its plan to reduce output in the face of declining demand for its product. Only the collective offer of all workers to accept a wage cut would induce an output response by the employer and a decision not to lay off part of its work force.

But even a collective offer by all workers to accept a wage cut would be unlikely to avoid an output reduction and layoffs. Consider a simple case in which the demand for the employer’s output declines by a third. Suppose the employer’s marginal cost of output is half the selling price (implying a demand elasticity of -2). Assume that demand is linear. With no change in its marginal cost, the firm would reduce output by a third, presumably laying off up to a third of its employees. Could workers avoid the layoffs by accepting lower wages to enable the firm to reduce its price? Or asked in another way, how much would marginal cost have to fall for the firm not to reduce output after the demand reduction?

Working out the algebra, one finds that for the firm to keep producing as much after a one-third reduction in demand, the firm’s marginal cost would have to fall by two-thirds, a decline that could be achieved only by a radical reduction in labor costs. This is surely an oversimplified view of the alternatives available to workers and employers, but the point is that workers facing a layoff after demand for the product they produce declines have almost no ability to remain employed, even by collectively accepting a wage cut.
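The first step of this calculation can be checked numerically under one natural reading of the setup (a linear demand curve scaled down by a third at every price, with hypothetical numbers chosen so that marginal cost is half the initial price, implying elasticity -2 at the initial optimum):

```python
# A numerical check of the first step of the argument above, under one
# natural reading of the setup (all numbers hypothetical): linear inverse
# demand P = 3 - Q and constant marginal cost c = 1, so that at the
# initial optimum the price is 2, marginal cost is half the price, and
# demand elasticity is -2.

def optimal_q(a, b, c):
    """Profit-maximizing output with inverse demand P = a - b*Q and
    constant marginal cost c: set marginal revenue a - 2*b*Q equal to c."""
    return (a - c) / (2.0 * b)

q_before = optimal_q(a=3.0, b=1.0, c=1.0)
# demand falls by a third: quantity demanded at every price is scaled by
# 2/3, so the inverse demand becomes P = 3 - 1.5*Q
q_after = optimal_q(a=3.0, b=1.5, c=1.0)

print(q_before, q_after)  # output falls from 1.0 to 2/3, i.e. by one third
```

With marginal cost unchanged, the firm’s profit-maximizing output falls by exactly one third after the demand reduction, which is the layoff scenario the workers are trying to avert.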

That conclusion applies a fortiori when decisions whether to accept a wage cut are left to individual workers, because the willingness of workers individually to accept a wage cut is irrelevant to their chances of retaining their jobs. Being laid off because of a decline in the demand for the product a worker is producing is a very different situation from being laid off because a worker’s employer is shifting to a new technology for which the worker lacks the requisite skills, so that he can remain employed only by accepting re-assignment to a lower-paying job.

Let’s follow Lucas a bit further:

Keynes, in chapter 2, deals with the situation facing an individual unemployed worker by evasion and wordplay only. Sentences like “more labor would, as a rule, be forthcoming at the existing money wage if it were demanded” are used again and again as though, from the point of view of a jobless worker, it is unambiguous what is meant by “the existing money wage.” Unless we define an individual’s wage rate as the price someone else is willing to pay him for his labor (in which case Keynes’s assertion is defined to be false), what is it?

Lucas, Id.

I must admit that, reading this passage again perhaps 30 or more years after my first reading, I’m astonished that I could once have read it without astonishment. Lucas gives the game away by accusing Keynes of engaging in evasion and wordplay before embarking himself on sustained evasion and wordplay. The meaning of the “existing money wage” is hardly ambiguous: it is the money wage that the unemployed worker was receiving before losing his job and that his fellow workers, who remain employed, continue to receive.

Is Lucas suggesting that the reason the worker lost his job, while his fellow workers did not lose theirs, is that the value of his marginal product fell but the value of his co-workers’ marginal product did not? Perhaps, but that would only add to my astonishment. At the current wage, employers had to reduce the number of workers employed until the marginal product of those remaining was high enough for the employer to continue employing them. That was not necessarily, and certainly not primarily, because the retained workers were more capable than those who were laid off.

The fact is, I think, that Keynes wanted to get labor markets out of the way in chapter 2 so that he could get on to the demand theory which really interested him.

More wordplay. Is it fact or opinion? Well, he says that he thinks it’s a fact. In other words, it’s really an opinion.

This is surely understandable, but what is the excuse for letting his carelessly drawn distinction between voluntary and involuntary unemployment dominate aggregative thinking on labor markets for the forty years following?

Mr. Keynes, really, what is your excuse for being such an awful human being?

[I]nvoluntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to ‘explain’ the theoretical constructs of our predecessor, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Lucas, Id.

Let’s rewrite this paragraph with a few strategic word substitutions:

Heliocentrism is not a fact or phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Copernicus introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: the observed movement of the planets in the heavens. Is it the task of modern theoretical physics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Copernicus died in 1543, just as his work on heliocentrism was published. Galileo’s works on heliocentrism were not published until 1610, almost 70 years after Copernicus published his work. So, under Lucas’s forty-year time limit, Galileo had no business trying to explain Copernican heliocentrism, which had still not proven fruitful. Moreover, even after Galileo had published his works, geocentric models were providing predictions of planetary motion as good as, if not better than, the heliocentric models, so decisive empirical evidence in favor of heliocentrism was still lacking. Not until Newton published his great work, 70 years after Galileo and 140 years after Copernicus, was heliocentrism finally accepted as fact.

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problem they face. One cannot, even conceptually, arrive at a usable definition of full employment.

Lucas, Id.

Belying his claim to be introducing scientific rigor into macroeconomics, Lucas resorts to an extended scholastic inquiry into whether an unemployed worker can ever really be unemployed involuntarily. Based on his scholastic inquiry into the nature of voluntariness, Lucas declares that Keynes was mistaken because he would not accept the discipline of optimization and equilibrium. But Lucas’s insistence on the discipline of optimization and equilibrium is misplaced unless he can provide an actual mechanism whereby the notional optimization of a single agent can be reconciled with the notional optimizations of other agents.

It was his inability to provide any explanation of the mechanism whereby the notional optimization of individual agents can be reconciled with the notional optimizations of other individual agents that led Lucas to resort to rational expectations to circumvent the need for such a mechanism. He successfully persuaded the economics profession that, by evading the need to explain such a reconciliation mechanism, it would not be shirking its explanatory duty, but would merely be fulfilling its methodological obligation to uphold the neoclassical axioms of rationality and optimization neatly subsumed under the heading of microfoundations.

Rational expectations and microfoundations provided the pretext that could justify, or at least excuse, the absence of any explanation of how an equilibrium is reached and maintained. The rational-expectations assumption was treated as an adequate substitute for the Walrasian auctioneer: each and every agent, using the common knowledge (and only the common knowledge) available to all agents, would reliably anticipate the equilibrium price vector prevailing throughout their infinite lives, thereby guaranteeing continuous equilibrium and the consistency of all optimal plans. That feat having been securely accomplished, it was but a small and convenient step to collapse the multitude of individual agents into a single representative agent, so that the virtue of submitting to the discipline of optimization could find its just and fitting reward.

Three Propagation Mechanisms in Lucas and Sargent with a Response from Brad DeLong

UPDATE (4/3/2022): Reupping this post with the response to my query sent by Brad DeLong.

I’m writing this post in hopes of eliciting some guidance from readers about the three propagation mechanisms to which Robert Lucas and Thomas Sargent refer in their famous 1978 article, “After Keynesian Macroeconomics.” The three propagation mechanisms were mentioned to parry criticisms of the rational-expectations principle underlying the New Classical macroeconomics that Lucas and Sargent were then developing as an alternative to Keynesian macroeconomics. I am wondering how subsequent research has dealt with these propagation mechanisms and how they are now treated in current macro-theory. Here is the relevant passage from Lucas and Sargent:

A second line of criticism stems from the correct observation that if agents’ expectations are rational and if their information sets include lagged values of the variable being forecast, then agents’ forecast errors must be a serially uncorrelated random process. That is, on average there must be no detectable relationships between a period’s forecast error and any previous period’s. This feature has led several critics to conclude that equilibrium models cannot account for more than an insignificant part of the highly serially correlated movements we observe in real output, employment, unemployment, and other series. Tobin (1977, p. 461) has put the argument succinctly:

One currently popular explanation of variations in employment is temporary confusion of relative and absolute prices. Employers and workers are fooled into too many jobs by unexpected inflation, but only until they learn it affects other prices, not just the prices of what they sell. The reverse happens temporarily when inflation falls short of expectation. This model can scarcely explain more than transient disequilibrium in labor markets.

So how can the faithful explain the slow cycles of unemployment we actually observe? Only by arguing that the natural rate itself fluctuates, that variations in unemployment rates are substantially changes in voluntary, frictional, or structural unemployment rather than in involuntary joblessness due to generally deficient demand.

The critics typically conclude that the theory only attributes a very minor role to aggregate demand fluctuations and necessarily depends on disturbances to aggregate supply to account for most of the fluctuations in real output over the business cycle. “In other words,” as Modigliani (1977) has said, “what happened to the United States in the 1930’s was a severe attack of contagious laziness.” This criticism is fallacious because it fails to distinguish properly between sources of impulses and propagation mechanisms, a distinction stressed by Ragnar Frisch in a classic 1933 paper that provided many of the technical foundations for Keynesian macroeconometric models. Even though the new classical theory implies that the forecast errors which are the aggregate demand impulses are serially uncorrelated, it is certainly logically possible that propagation mechanisms are at work that convert these impulses into serially correlated movements in real variables like output and employment. Indeed, detailed theoretical work has already shown that two concrete propagation mechanisms do precisely that.

One mechanism stems from the presence of costs to firms of adjusting their stocks of capital and labor rapidly. The presence of these costs is known to make it optimal for firms to spread out over time their response to the relative price signals they receive. That is, such a mechanism causes a firm to convert the serially uncorrelated forecast errors in predicting relative prices into serially correlated movements in factor demands and output.

A second propagation mechanism is already present in the most classical of economic growth models. Households’ optimal accumulation plans for claims on physical capital and other assets convert serially uncorrelated impulses into serially correlated demands for the accumulation of real assets. This happens because agents typically want to divide any unexpected changes in income partly between consuming and accumulating assets. Thus, the demand for assets next period depends on initial stocks and on unexpected changes in the prices or income facing agents. This dependence makes serially uncorrelated surprises lead to serially correlated movements in demands for physical assets. Lucas (1975) showed how this propagation mechanism readily accepts errors in forecasting aggregate demand as an impulse source.

A third likely propagation mechanism has been identified by recent work in search theory. (See, for example, McCall 1965, Mortensen 1970, and Lucas and Prescott 1974.) Search theory tries to explain why workers who for some reason are without jobs find it rational not necessarily to take the first job offer that comes along but instead to remain unemployed for awhile until a better offer materializes. Similarly, the theory explains why a firm may find it optimal to wait until a more suitable job applicant appears so that vacancies persist for some time. Mainly for technical reasons, consistent theoretical models that permit this propagation mechanism to accept errors in forecasting aggregate demand as an impulse have not yet been worked out, but the mechanism seems likely eventually to play an important role in a successful model of the time series behavior of the unemployment rate. In models where agents have imperfect information, either of the first two mechanisms and probably the third can make serially correlated movements in real variables stem from the introduction of a serially uncorrelated sequence of forecasting errors. Thus theoretical and econometric models have been constructed in which in principle the serially uncorrelated process of forecasting errors can account for any proportion between zero and one of the steady state variance of real output or employment. The argument that such models must necessarily attribute most of the variance in real output and employment to variations in aggregate supply is simply wrong logically.
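The partial-adjustment logic of the first mechanism Lucas and Sargent describe can be sketched numerically. The simulation below is only a minimal illustration, not anything drawn from their paper; the persistence coefficient `rho` is an arbitrary illustrative value standing in for the effect of adjustment costs:

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

random.seed(42)
T = 20000
shocks = [random.gauss(0, 1) for _ in range(T)]  # serially uncorrelated forecast errors

# Partial adjustment: costs of adjusting capital and labor spread the firm's
# response out over time, so output responds gradually to each shock.
rho = 0.8  # illustrative persistence induced by adjustment costs
y = [0.0]
for e in shocks:
    y.append(rho * y[-1] + e)
y = y[1:]

print(round(lag1_autocorr(shocks), 2))  # close to 0: the impulses are white noise
print(round(lag1_autocorr(y), 2))       # close to 0.8: output is serially correlated
```

The white-noise impulses come out serially uncorrelated, while the simulated output series inherits the persistence of the adjustment process, which is exactly the conversion Lucas and Sargent invoke.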

My problem with the Lucas-Sargent argument is that even if the deviations from a long-run equilibrium path are serially correlated, shouldn’t those deviations be diminishing over time after the initial disturbance? Can these propagation mechanisms account for amplification of the initial disturbance before the adjustment toward the equilibrium path begins? I would gratefully welcome any responses.
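My question can be made concrete by comparing impulse-response shapes. In the sketch below (the coefficients are purely illustrative), a partial-adjustment AR(1) process spreads a one-time shock over many periods but only damps it, whereas amplification of the disturbance before the return toward equilibrium would require a hump-shaped response, as in the AR(2) example:

```python
def impulse_response(coeffs, horizon):
    """Response of y to a one-time unit shock at t=0,
    for y_t = sum_j coeffs[j] * y_{t-1-j} + e_t."""
    h = [1.0]  # effect of the shock on impact
    for t in range(1, horizon):
        h.append(sum(c * h[t - 1 - j] for j, c in enumerate(coeffs) if t - 1 - j >= 0))
    return h

# Partial adjustment (AR(1)): the shock's effect only decays -- damping, no amplification
damped = impulse_response([0.8], 10)
print(all(damped[t] < damped[t - 1] for t in range(1, 10)))  # True: monotone decline

# A hump-shaped response (AR(2)) would be needed to amplify the shock before decay
humped = impulse_response([1.3, -0.4], 10)
print(humped[1] > humped[0])  # True: effect grows after impact before adjusting back
```

The AR(1) case matches the partial-adjustment mechanisms in the quoted passage; nothing in those mechanisms by itself delivers the hump-shaped amplification my question asks about.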

David Glasner has a question about the “rational expectations” business-cycle theories developed in the 1970s:

David Glasner: Three Propagation Mechanisms in Lucas & Sargent: ‘I’m… hop[ing for]… some guidance… about… propagation mechanisms… [in] Robert Lucas and Thomas Sargent[‘s]… “After Keynesian Macroeconomics.”… 

The critics typically conclude that the theory only attributes a very minor role to aggregate demand fluctuations and necessarily depends on disturbances to aggregate supply…. [But] even though the new classical theory implies that the forecast errors which are the aggregate demand impulses are serially uncorrelated, it is certainly logically possible that propagation mechanisms are at work that convert these impulses into serially correlated movements in real variables like output and employment… the presence of costs to firms of adjusting their stocks of capital and labor rapidly…. accumulation plans for claims on physical capital and other assets convert serially uncorrelated impulses into serially correlated demands for the accumulation of real assets… workers who for some reason are without jobs find it rational not necessarily to take the first job offer that comes along but instead to remain unemployed for awhile until a better offer materializes…. In principle the serially uncorrelated process of forecasting errors can account for any proportion between zero and one of the [serially correlated] steady state variance of real output or employment. The argument that such models must necessarily attribute most of the variance in real output and employment to variations in aggregate supply is simply wrong logically…

My problem with the Lucas-Sargent argument is that even if the deviations from a long-run equilibrium path are serially correlated, shouldn’t those deviations be diminishing over time after the initial disturbance? Can these propagation mechanisms account for amplification of the initial disturbance before the adjustment toward the equilibrium path begins? I would gratefully welcome any responses…

In some ways this is of only history-of-thought interest. For Lucas and Prescott, at least, had within five years of the writing of “After Keynesian Macroeconomics” decided that the critics were right: that their models, in which mistaken decisions are driven by serially uncorrelated forecast errors, could not account for the bulk of the serially correlated business-cycle variance of real output and employment, and that they needed to shift to studying real business cycle theory instead of price-misperceptions theory. The first problem was that time-series methods generated shocks that came at the wrong times to explain recessions. The second problem was that the propagation mechanisms did not amplify but rather damped the shock: at best they produced some kind of partial-adjustment process that extended the impact of a shock on real variables to N periods and diminished its impact in any single period to 1/N. There was no… what is the word?… multiplier in the system.

It was stunning to watch in real time in the early 1980s. As Paul Volcker hit the economy on the head with the monetary-stringency brick, repeatedly, quarter after quarter; as his serially correlated and hence easily anticipated policy moves had large and highly serially correlated effects on output; Robert Lucas and company simply… pretended it was not happening: pretended that monetary policy was not having major effects on output and employment in the first half of the 1980s, and that the monetary policies that were having such profound real impacts could somehow be interpreted as “surprises” leading to “misperceptions”. Meanwhile, over in the other corner, Robert Barro was claiming that he saw no break in the standard pattern of federal deficits from the Reagan administration’s combination of tax cuts plus defense buildup.

Those of us who were graduate students at the time watched this, and drew conclusions about the likelihood that Lucas, Prescott, and company had good enough judgment and close enough contact with reality that their proposed “real business cycle” research program would be a productive one—conclusions that, I think, time has proved fully correct.

Behind all this, of course, was this issue: the “microfoundations” of the Lucas “island economy” model were totally stupid: people are supposed to “misperceive” relative prices because they know the nominal prices at which they sell but do not know the nominal prices at which they buy, hence people confuse a monetary shock-generated rise in the nominal price level with an increase in the real price of what they produce, and hence work harder and longer and produce more? (I forget who it was who said at the time that the model seemed to require a family in which the husband worked and the wife went to the grocery store and the husband never listened to anything the wife said.) These so-called “microfoundations” could only be rationally understood as some kind of metaphor. But what kind of metaphor? And why should it have any special status, and claim on our attention?

Paul Krugman’s judgment on the consequences of this intellectual turn is even harsher than mine:

What made the Dark Ages dark was the fact that so much knowledge had been lost, that so much known to the Greeks and Romans had been forgotten by the barbarian kingdoms that followed. And that’s what seems to have happened to macroeconomics in much of the economics profession. The knowledge that S=I doesn’t imply the Treasury view—the general understanding that macroeconomics is more than supply and demand plus the quantity equation — somehow got lost in much of the profession. I’m tempted to go on and say something about being overrun by barbarians in the grip of an obscurantist faith…

I would merely say that it has left us, over what is now two generations, with a turn to DSGE models—Dynamic Stochastic General Equilibrium—that must satisfy a set of formal rhetorical requirements that really do not help us fit the data, and that it gave many, many people an excuse not to read and hence a license to remain ignorant of James Tobin.

Brad


Hayek Refutes Banana Republican Followers of Scalia Declaring War on Unenumerated Rights

Though overshadowed by the towering obnoxiousness of their questioning of Judge Ketanji Brown Jackson in her confirmation hearings last week, the Banana Republicans on the Senate Judiciary Committee signaled that their goals for remaking American Constitutional jurisprudence extend far beyond overturning Roe v. Wade; they will be satisfied with nothing less than the evisceration of all unenumerated Constitutional rights that the courts have found over the past two centuries. The idea that rights exist only insofar as they are explicitly recognized and granted by written legislative or Constitutional enactment, as understood at the moment of enactment, is the bedrock on which Justice Scalia founded his jurisprudential doctrine.

The idea was clearly rejected by the signatories of the Declaration of Independence, which in its second sentence declared:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.

Clearly the Declaration presupposes that individual rights exist independently of any legislative or Constitutional enactment. Moreover, the three rights listed by the Declaration (life, liberty, and the pursuit of happiness) are not exhaustive, but are only among a longer list of unenumerated rights endowed to individuals by their Creator. Rejecting the idea of natural or moral rights to which individuals are entitled by virtue of their humanity, Scalia adopted the positivist position that all law is an expression of the will of the sovereign, which, in the United States, is in some abstract sense “the people” as expressed through the Constitution (including its Amendments), and through legislation by Congress and state legislatures.

Treating Scalia’s doctrine as controlling, the Banana Republicans regard all judicial decisions that invalidate legislative enactments based on the existence of individual rights not explicitly enumerated in the Constitution as fundamentally illegitimate and worthy of being overruled by suitably right-thinking judges.

Not only is Scalia’s doctrine fundamentally at odds with the Declaration of Independence, which has limited legal force, it is directly contradicted by the Ninth Amendment to the Constitution which states:

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

So, the Ninth Amendment explicitly negates the Scalian doctrine that the only rights to which individuals have a legal claim are those explicitly enumerated by the Constitution. Scalia’s jurisprudential predecessor, Robert Bork, whose originalist philosophy Scalia revised and restated in a more palatable form, dismissed the Ninth Amendment as unintelligible, and, therefore, essentially a nullity. Scalia, himself, was unwilling to call it unintelligible, but came up with the following, hardly less incoherent, rationale, reeking of bad faith, for relegating the Ninth Amendment to the ash heap of history:

He should apply the Ninth Amendment as it is written. And I apply it rigorously; I do not deny or disparage the existence of other rights in the sense of natural rights. That’s what the framers meant by that. Just because we’ve listed some rights of the people here doesn’t mean that we don’t believe that people have other rights. And if you try to take them away, we will revolt. And a revolt will be justified. It was the framers’ expression of their belief in natural law. But they did not put it in the charge of the courts to enforce.

https://lareviewofbooks.org/article/reading-the-text-an-interview-with-justice-antonin-scalia-of-the-u-s-supreme-court/

If Scalia had been honest, he would have said “He cannot apply the Ninth Amendment as it is written. And I rigorously do not apply it.” I mean what could Scalia, or any judge in thrall to Scalian jurisprudence, possibly do with the Ninth Amendment after saying: “But [the framers] did not put [the Ninth Amendment] in the charge of the courts to enforce”? After all, according to the estimable [sarcasm alert] Mr. Justice Scalia, the Ninth Amendment was added to the Constitution to grant the citizenry — presumably exercising their Second Amendment rights and implementing Second Amendment remedies — a right to overthrow the government that the framers were, at that very moment, ordaining and establishing.

In The Constitution of Liberty, F. A. Hayek provided an extended analysis of the U. S. Constitution and why a Bill of Rights was added as a condition of its ratification in 1788. His discussion of the Ninth Amendment demolishes Scalia’s nullification of the Ninth Amendment. Here is an extended quotation:

Hayek, The Constitution of Liberty, pp. 185-86

Eight Recurring Ideas in My Studies in the History of Monetary Theory

In the introductory chapter of my book Studies in the History of Monetary Theory: Controversies and Clarifications, I list eight main ideas to which I often come back in the sixteen subsequent chapters. Here they are:

  1. The standard neoclassical models of economics textbooks typically assume full information and perfect competition. But these assumptions are, or ought to be, just the starting point, not the end, of analysis. Recognizing when and why these assumptions need to be relaxed and what empirical implications follow from relaxing those assumptions is how economists gain practical insight into, and understanding of, complex economic phenomena.
  2. Since the late eighteenth or early nineteenth century, much, if not most, of the financial instruments actually used as media of exchange (money) have been produced by private financial institutions (usually commercial banks); the amount of money that is privately produced is governed by the revenue generated and the cost incurred by creating money.
  3. The standard textbook model of international monetary adjustment under the gold standard (or any fixed-exchange-rate system), the price-specie-flow mechanism introduced by David Hume, mischaracterized the adjustment mechanism by overlooking the fact that the prices of tradable goods in any country are constrained by the prices of those tradable goods in other countries. That arbitrage constraint on the prices of tradable goods prevents price levels in different currency areas from deviating from a common international level, regardless of local changes in the quantity of money.
  4. The Great Depression was caused by a rapid appreciation of gold resulting from the increasing monetary demand for gold occasioned by the restoration of the international gold standard in the 1920s after the demonetization of gold in World War I.
  5. If the expected rate of deflation exceeds the real rate of interest, real-asset prices crash and economies collapse.
  6. The primary concern of macroeconomics as a field of economics is to explain systemic failures of coordination that lead to significant lapses from full employment.
  7. Lapses from full employment result from substantial and widespread disappointment of agents’ expectations of future prices.
  8. The only — or at least the best — systematic analytical approach to the study of such lapses is the temporary-equilibrium approach introduced by Hicks in Value and Capital.
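Point 5, on expected deflation exceeding the real rate of interest, can be illustrated with the Fisher relation, i = r + πᵉ. The numbers below are purely illustrative and are not taken from the book:

```python
def required_nominal_rate(real_rate, expected_inflation):
    """Fisher relation (approximate): nominal rate i = r + pi_e."""
    return real_rate + expected_inflation

r = 0.02      # real rate of interest: 2%
pi_e = -0.03  # expected deflation of 3%, so pi_e < -r
i = required_nominal_rate(r, pi_e)
print(round(i, 4))  # negative: no asset can offer this nominal rate at the zero lower bound

# Holding cash then yields an expected real return of -pi_e = 3%, exceeding the
# real rate r = 2%, so real assets are held only if their prices fall enough to
# offer a competitive expected yield: hence the crash in real-asset prices.
print(-pi_e > r)  # True
```

The arithmetic is the whole point: once expected deflation exceeds the real rate, cash dominates real assets at prevailing prices, and only a collapse in real-asset prices can restore equilibrium.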

Here is a list of the chapter titles:

1. Introduction

Part One: Classical Monetary Theory

2. A Reinterpretation of Classical Monetary Theory

3. On Some Classical Monetary Controversies

4. The Real Bills Doctrine in the Light of the Law of Reflux

5. Classical Monetary Theory and the Quantity Theory

6. Monetary Disequilibrium and the Demand for Money in Ricardo and Thornton

7. The Humean and Smithian Traditions in Monetary Theory

8. Rules versus Discretion in Monetary Policy Historically Contemplated

9. Say’s Law and the Classical Theory of Depressions

Part Two: Hawtrey, Keynes, and Hayek

10. Hawtrey’s Good and Bad Trade: A Centenary Retrospective

11. Hawtrey and Keynes

12. Where Keynes Went Wrong

13. Debt, Deflation, the Gold Standard and the Great Depression

14. Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel? (with Ronald Batchelder)

15. The Sraffa-Hayek Debate on the Natural Rate of Interest (with Paul Zimmerman)

16. Hayek, Deflation, Gold and Nihilism

17. Hayek, Hicks, Radner and Four Equilibrium Concepts: Intertemporal, Sequential, Temporary and Rational Expectations

Wherein I Try to Calm Professor Blanchard’s Nerves

Olivier Blanchard is rightly counted among the most eminent macroeconomists of our time, and his pronouncements on macroeconomic matters should not be dismissed casually. So his commentary yesterday for the Peterson Institute for International Economics, responding to a previous policy brief by David Reifschneider and David Wilcox arguing that the recent burst of inflation is likely to recede, bears close attention.

Blanchard does not reject the analysis of Reifschneider and Wilcox outright, but he argues that they overlook factors that could cause inflation to remain high unless policy makers take more aggressive action to bring inflation down than is recommended by Reifschneider and Wilcox. Rather than go through the details of Blanchard’s argument, I address the two primary concerns he identifies: (1) the potential for inflation expectations to become unanchored, as they were in the 1970s and early 1980s, by persistent high inflation, and (2) the potential inflationary implications of wage catchup after the erosion of real wages by the recent burst of inflation.

Unanchored Inflation Expectations and the Added Cost of a Delayed Response to Inflation

Blanchard cites a forthcoming book by Alan Blinder on soft and hard landings from inflation in which Blinder examines nine Fed tightening episodes in which tightening was the primary cause of a slowdown or a recession. Based on the historical record, Blinder is optimistic that the Fed can manage a soft landing if it needs to reduce inflation. Blanchard doesn’t share Blinder’s confidence.

[I]n most of the episodes Blinder has identified, the movements in inflation to which the Fed reacted were too small to be of direct relevance to the current situation, and the only comparable episode to today, if any, is the episode that ended with the Volcker disinflation of the early 1980s.

I find that a scary comparison. . . .

[I]t shows what happened when the Fed got seriously “behind the curve” in 1974–75. . . . It then took 8 years, from 1975 to 1983, to reduce inflation to 4 percent.

And I find Blanchard’s comparison of the 1975-1983 period with the current situation problematic. First, he ignores the fact that the 1975-1983 episode did not display a steady rate of inflation or a uniform increase in inflation from 1975 until Volcker finally tamed it by way of the brutal 1981-82 recession. As I’ve explained previously in posts on the 1970s and 1980s (here, here, and here), and in chapters 7 and 8 of my book Studies in the History of Monetary Theory, the 1970s inflation was the product of a series of inflationary demand-side and supply-side shocks and of misguided policy responses by a Fed guided by politically motivated misconceptions, with little comprehension of the consequences of its actions.

It would be unwise to assume that the Fed will never embark on a similar march of folly, but it would be at least as unwise to adopt a proposed policy on the assumption that the alternative to that policy would be a repetition of the earlier march. What commentary on the 1970s largely overlooks is that there was an enormous expansion of the US labor force in that period as baby boomers came of age and as women began seeking and finding employment in steadily increasing numbers. The labor-force participation rate in the 1950s and 1960s fluctuated between about 58% and about 60%, mirroring fluctuations in the unemployment rate. Between 1970 and 1980 the labor-force participation rate rose from just over 60% to just over 64% even as the unemployment rate rose from about 5% to over 7%. The 1970s were not, for the most part, a period of stagflation, but a period of inflation and strong growth interrupted by one deep recession (1974-75) and bookended by two minor recessions (1969-70 and 1979-80). But the rising trend of unemployment during the decade was largely attributable not to stagnation but to a rapidly expanding labor force and a rising labor-force participation rate.

The rapid increase in inflation in 1973 was largely a policy-driven error: the Nixon/Burns collaboration to ensure Nixon’s re-election in 1972, compounded by the failure to taper the stimulus in 1973 after full employment had been restored just in time for the election. The oil shock of 1973-74 would have justified allowing a transitory period of increased inflation to cushion the negative effect of the increase in energy prices and to dilute the real magnitude of the nominal increase in oil prices. But the combined effect of excess aggregate demand and a negative supply shock led to an exaggerated compensatory tightening of monetary policy that led to the unnecessarily deep and prolonged recession of 1974-75.

A strong recovery ensued after the recession, which, not surprisingly, was associated with declining inflation that fell below 5% in 1976. However, owing to the historically high rate of unemployment, only partially attributable to the previous recession, the incoming Carter administration promoted expansionary fiscal and monetary policies, which Arthur Burns, hoping to be reappointed by Carter to another term as Fed Chairman, willingly implemented. Rather than continuing on its downward trend inherited from the previous administration, inflation resumed its upward trend in 1977.

Burns’s hopes to be reappointed by Carter were disappointed, but his replacement, G. William Miller, made no effort to tighten monetary policy to reverse the upward trend in inflation. A second oil shock in 1979, associated with the Iranian Revolution and the taking of US hostages in Iran, caused crude oil prices to more than double over the course of 1979. Again, the appropriate monetary-policy response was not to tighten but to accommodate the price increase without causing a recession.

However, by the time of the second oil shock in 1979, inflation was already in the high single digits. The second oil shock, combined with the disastrous effects of the controls on petroleum prices carried over from the Nixon administration, created a crisis atmosphere that allowed the Reagan administration, with the cooperation of Paul Volcker, to implement a radical Monetarist anti-inflation policy. The policy was based on the misguided presumption that keeping the rate of growth of some measure of the money stock below a 5% annual rate would cure inflation with little effect on the overall economy if it were credibly implemented.

Volcker’s reputation was such that it was thought by supporters of the policy that his commitment would be relied upon by the public, so that a smooth transition to a lower rate of inflation would follow, and any downturn would be mild and short-lived. But the result was an unexpectedly deep and long-lasting recession.

The recession was needlessly prolonged by the grave misunderstanding of the causal relationship between the monetary aggregates and macroeconomic performance that had been perpetrated by Milton Friedman’s anti-Keynesian Monetarist counterrevolution. After triggering the sharpest downturn of the postwar era, the Monetarist anti-inflation strategy adopted by Volcker was, in the summer of 1982, on the verge of causing a financial crisis before Volcker announced that the Fed would no longer try to target any of the monetary aggregates, an announcement that triggered an immediate stock-market boom and, within a few months, the start of an economic recovery.

Thus, Blanchard is wrong to compare our current situation to the entire 1975-1983 period. The current situation, rather, is similar to the situation in 1973, when an economy, in the late stages of a recovery with rising inflation, was subjected to a severe supply shock. The appropriate response to that supply shock was not to tighten monetary policy, but merely to draw down the monetary stimulus of the previous two years. However, the Fed, perhaps shamed by the excessive, and politically motivated, monetary expansion of the previous two years, overcompensated by tightening monetary policy to counter the combined inflationary impact of its own previous policy and the recent oil-price increase, immediately triggering what was then the sharpest downturn of the postwar era. That is the lesson to draw from the 1970s, and it is a mistake that the Fed ought not to repeat now.

The Catch-Up Problem: Are Rapidly Rising Wages a Ticking Time-Bomb?

Blanchard is worried that, because price increases exceeded wage increases in 2021, causing real wages to fall, workers will rationally expect, and demand, nominal-wage increases in 2022 to compensate for the decline in their real wages, thereby fueling a further increase in inflation. This is a familiar argument based on the famous short-run Phillips-Curve trade-off between inflation and unemployment, whereby reduced unemployment, resulting from the real-wage reduction associated with inflation, causes inflation to increase.

This argument is problematic on at least two levels. First, it presumes that the Phillips Curve represents a structural relationship, when it is merely a reduced form, just as an observed relationship between the price of a commodity and sales of that commodity is a reduced form, not a demand curve. Inferences cannot be made from a reduced form about the effect of a price change, nor can inferences about the effect of inflation be made from the Phillips Curve.

But one needn’t resort to a somewhat sophisticated argument to see why Blanchard’s fears that wage catchup will lead to a further round of inflation are not well-grounded. Blanchard argues that business firms, having pocketed windfall profits from rising prices that have outpaced wage increases, will grant workers compensatory wage increases to restore workers’ real wages, while also increasing prices to compensate themselves for the increased wages that they have agreed to pay their workers.

I’m sorry, but with all due respect to Professor Blanchard, that argument makes no sense. Evidently, firms have generally enjoyed a windfall when market conditions allowed them to raise prices without raising wages. Why, if wages finally catch up to prices, will they raise prices again? Either firms can choose, at will, how much profit to make when they set prices, or their prices are constrained by market forces. If Professor Blanchard believes that firms can simply choose how much profit they make when they set prices, then he seems to be subscribing to Senator Warren’s theory of inflation: that inflation is caused by corporate greed. If he believes that, in setting prices, firms are constrained by market forces, then the mere fact that market conditions allowed them to increase prices faster than wages rose in 2021 does not mean that, if market conditions cause wages in 2022 to rise faster than they did in 2021, firms, after absorbing those wage increases, will automatically be able to maintain their elevated profit margins by raising prices correspondingly.

The market conditions facing firms in 2022 will be determined by, among other things, the monetary policy of the Fed. Whether firms are able to raise prices in 2022 as fast as wages rise in 2022 will depend on the monetary policy adopted by the Fed. If the Fed’s monetary policy aims at gradually slowing down the rate of increase in nominal GDP in 2022 from the 2021 rate of increase, firms overall will not easily be able to raise prices as fast as wages rise in 2022. But why should anyone expect that firms that enjoyed windfall profits from inflation in 2021 will be able to continue enjoying those elevated profits in perpetuity?

Professor Blanchard posits simple sectoral equations: one determining the rate of wage increases, and another determining the rate of price increases given the rate of wage increases. This sort of one-way causality is much too simplified and ignores the fundamental fact that all prices and wages, and expectations of future prices and wages, are mutually determined in a simultaneous system. One can’t reason from a change in a single variable and extrapolate from that change how the rest of the system will adjust.

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture, whose principal theme was an attack on the simple notion that the long-observed correlation between aggregate demand and employment was a reliable basis for conducting macroeconomic policy, “The Pretence of Knowledge.” Reiterating an argument he had made over 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek was informally anticipating the argument that Robert Lucas repackaged two years later in his famous critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between “phenomena of unorganized complexity” and “phenomena of organized complexity.” Statistical relationships or correlations between phenomena of unorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as merely the pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on a methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie models of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained that macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those choices. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than that individual rationality be explicitly assumed in every macroeconomic model and that every decision by every agent in the model be individually rational.

In practice, implementing Lucasian methodological principles required that in any macroeconomic model all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, all individual decisions being conditional on the decisions of other agents. Individual optimization can only be solved simultaneously for all agents, not individually in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations was assumed, based on faulty reasoning by Walras, Fisher, and Pareto, who simply counted equations and unknowns. A rigorous proof of existence was provided only by Abraham Wald in 1936 and subsequently, in more general form, by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but Marshallian partial-equilibrium analysis presumes that all markets and prices, except the price in the single market under analysis, are in a state of equilibrium. So the Marshallian approach provides no more explanation than the Walrasian approach of a process by which a set of equilibrium prices for an entire economy is, or could be, reached.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model. It does so on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series derived from these models are generated by attributing all observed variations in national income to unexplained productivity shocks, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of them.

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1960 paper “Towards a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility is somehow a guarantee that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the concept of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was methodologically necessary and ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the endeavor ought to be conducted as a theoretical and empirical one. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications for both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying the theories. But most important theories, even in pure mathematics, start out as informal theories that are only gradually axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives by which Lucas and his followers have justified their pretensions to scientific prestige and authority, and with which they have compelled compliance, only belie those pretensions.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

