
Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence, consisting of micro (utility and cost theory, Econ 101A), general-equilibrium theory (Econ 101B), and macro theory (Econ 102), as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement and highlighted its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.
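The tâtonnement idea lends itself to a simple numerical sketch. The following toy example (my own illustration, not anything from the course; all the parameters are made up) has an auctioneer adjust the price of one good in proportion to excess demand in a two-good Cobb-Douglas exchange economy until the market clears:

```python
# Toy tâtonnement: an auctioneer adjusts the price of good 1 (good 2 is
# the numeraire, p2 = 1) in proportion to excess demand until the market
# clears. Two Cobb-Douglas agents; all numbers are illustrative.

def excess_demand_good1(p1):
    # Agent A: spends share 0.6 on good 1, endowment (1, 0) -> wealth = p1
    # Agent B: spends share 0.3 on good 1, endowment (0, 1) -> wealth = 1
    demand_a = 0.6 * (p1 * 1) / p1      # = 0.6
    demand_b = 0.3 * 1 / p1
    return demand_a + demand_b - 1      # total demand minus total endowment

p1, step = 2.0, 0.5
for _ in range(200):
    z = excess_demand_good1(p1)
    if abs(z) < 1e-10:
        break
    p1 += step * z                      # raise price under excess demand, cut it under a glut

print(round(p1, 4))                     # 0.75, the market-clearing price
```

By Walras’s law, clearing the market for good 1 clears the market for good 2 as well. The crucial fiction, which is what gives the concept its profound significance, is that no trading takes place until the equilibrium price has been found.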

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later, when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that’s a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful; macroeconomics was thus often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
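Those polar cases are easy to see in a minimal numerical IS-LM solver (a sketch of my own with made-up parameters, not anything assigned in the course). The parameter h, the interest sensitivity of money demand, is the hinge of the old debate: as h grows large the model collapses toward the Keynesian cross, and monetary policy loses traction.

```python
# Minimal IS-LM solver (illustrative parameters only).
# IS:  Y = a + b*Y + I0 - d*r + G
# LM:  M = k*Y - h*r   (h = interest sensitivity of money demand)

def islm(G, M, a=50, b=0.8, I0=100, d=25, k=0.25, h=50):
    A = a + I0 + G                       # autonomous spending
    Y = (A + d * M / h) / ((1 - b) + d * k / h)
    r = (k * Y - M) / h
    return Y, r

Y0, _ = islm(G=100, M=150)               # baseline equilibrium income
Y_fiscal, _ = islm(G=110, M=150)         # fiscal expansion: +10 of G
Y_money, _ = islm(G=100, M=160)          # monetary expansion: +10 of M

# Keynesian-cross limit: money demand almost perfectly interest-elastic
# (huge h), so Y ~ A/(1-b) and monetary expansion barely moves income.
Y_trap, _ = islm(G=100, M=160, h=1e9)

print(round(Y0, 1), round(Y_fiscal - Y0, 1), round(Y_money - Y0, 1), round(Y_trap, 1))
# 1000.0 30.8 15.4 1250.0
```

With these numbers both policies matter; sending h toward zero would instead give the Monetarist case, with Y pinned down by M/k and fiscal policy impotent.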

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own “Keynes and the Keynesians: A Suggested Interpretation,” a preview of his forthcoming book, and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in fall 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, stable and unstable dynamic systems and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and to business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war, and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index of Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different course, following the path blazed by Robert Lucas and his followers, who insisted on a nearly continuous state of rational-expectations equilibrium and implicitly denied that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing level, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing nominal wage by raising the price level, thereby reducing the real wage to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment could occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Keynes’s great contribution, in Axel’s view, was to call attention to the implications of the absence of any actual counterpart to the coordinating mechanism simply assumed by neoclassical economic theory (whether in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption). Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: (a) all agents are treated as equilibrium optimizers; (b) all agents therefore form identical rational expectations of all future prices using the same common knowledge; so that (c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real productivity shocks, dispensing with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis, but with little claim to any actual analytical insight.

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. Earl explained the method as resting on the assumption that markets for current delivery clear, but at market-clearing prices that differ from the prices agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

Hayek and the Lucas Critique

In March I wrote a blog post, “Robert Lucas and the Pretense of Science,” which was a draft proposal for a paper for a conference on Coordination Issues in Historical Perspectives to be held in September. My proposal having been accepted, I’m going to post sections of the paper on the blog in hopes of getting some feedback as I write the paper. What follows is the first of several anticipated draft sections.

Just 31 years old, F. A. Hayek rose rapidly to stardom after giving four lectures at the London School of Economics at the invitation of his almost exact contemporary, and soon to be best friend, Lionel Robbins. Hayek had already published several important works, the most important of which was Hayek ([1928] 1984), laying out a basic conceptualization of intertemporal equilibrium almost simultaneously with the similar conceptualizations of two young Swedish economists, Gunnar Myrdal (1927) and Erik Lindahl ([1929] 1939).

Hayek’s (1931a) LSE lectures aimed to provide a policy-relevant version of a specific theoretical model of the business cycle that drew upon, but was just a particular instantiation of, the general conceptualization developed in his 1928 contribution. Delivered less than two years after the start of the Great Depression, Hayek’s lectures gave a historical overview of the monetary theory of business cycles, an account of how monetary disturbances cause real effects, and a skeptical discussion of how monetary policy might, or more likely might not, counteract or mitigate the downturn then underway. It was Hayek’s skepticism about countercyclical policy that helped make those lectures so compelling but also elicited such a hostile reaction during the unfolding crisis.

The extraordinary success of his lectures established Hayek’s reputation as a preeminent monetary theorist alongside established figures like Irving Fisher, A. C. Pigou, D. H. Robertson, R. G. Hawtrey, and of course J. M. Keynes. Hayek’s (1931b) critical review of Keynes’s just-published Treatise on Money (1930), which appeared soon after his LSE lectures and provoked a heated exchange with Keynes himself, showed him to be a skilled debater and a powerful polemicist.

Hayek’s meteoric rise was, however, followed by a rapid fall from the briefly held pinnacle of his early career. Aside from the imperfections and weaknesses of his own theoretical framework (Glasner and Zimmerman 2021), his diagnosis of the causes of the Great Depression (Glasner and Batchelder [1994] 2021a, 2021b) and his policy advice (Glasner 2021) were theoretically misguided and inappropriate to the deflationary conditions underlying the Great Depression.

Nevertheless, Hayek’s conceptualization of intertemporal equilibrium provided insight into the role not only of prices, but also of price expectations, in accounting for cyclical fluctuations. In Hayek’s 1931 version of his cycle theory, the upturn results from bank-financed investment spending enabled by monetary expansion that fuels an economic boom characterized by increased total spending, output and employment. However, owing to resource constraints, misalignments between demand and supply, and drains of bank reserves, the optimistic expectations engendered by the boom are doomed to eventual disappointment, whereupon a downturn begins.

I need not engage here with the substance of Hayek’s cycle theory, which I have criticized elsewhere (see references above). But I would like to consider his 1934 explanation, responding to Hansen and Tout (1933), of why a permanent monetary expansion would be impossible. Hansen and Tout had disputed Hayek’s contention that monetary expansion must inevitably lead to a recession, arguing that an unconstrained monetary authority, not being forced by a reserve drain to halt a monetary expansion, could allow a boom to continue indefinitely, permanently maintaining an excess of investment over saving.

Hayek (1934) responded as follows:

[A] constant rate of forced saving (i.e., investment in excess of voluntary saving) [requires] a rate of credit expansion which will enable the producers of intermediate products, during each successive unit of time, to compete successfully with the producers of consumers’ goods for constant additional quantities of the original factors of production. But as the competing demand from the producers of consumers’ goods rises (in terms of money) in consequence of, and in proportion to, the preceding increase of expenditure on the factors of production (income), an increase of credit which is to enable the producers of intermediate products to attract additional original factors, will have to be, not only absolutely but even relatively, greater than the last increase which is now reflected in the increased demand for consumers’ goods. Even in order to attract only as great a proportion of the original factors, i.e., in order merely to maintain the already existing capital, every new increase would have to be proportional to the last increase, i.e., credit would have to expand progressively at a constant rate. But in order to bring about constant additions to capital, it would have to do more: it would have to increase at a constantly increasing rate. The rate at which this rate of increase must increase would be dependent upon the time lag between the first expenditure of the additional money on the factors of production and the re-expenditure of the income so created on consumers’ goods. . . .

But I think it can be shown . . . that . . . such a policy would . . . inevitably lead to a rapid and progressive rise in prices which, in addition to its other undesirable effects, would set up movements which would soon counteract, and finally more than offset, the “forced saving.” That it is impossible, either for a simple progressive increase of credit which only helps to maintain, and does not add to, the already existing “forced saving,” or for an increase in credit at an increasing rate, to continue for a considerable time without causing a rise in prices, results from the fact that in neither case have we reason to assume that the increase in the supply of consumers’ goods will keep pace with the increase in the flow of money coming on to the market for consumers’ goods. Insofar as, in the second case, the credit expansion leads to an ultimate increase in the output of consumers’ goods, this increase will lag considerably and increasingly (as the period of production increases) behind the increase in the demand for them. But whether the prices of consumers’ goods will rise faster or slower, all other prices, and particularly the prices of the original factors of production, will rise even faster. It is only a question of time when this general and progressive rise of prices becomes very rapid. My argument is not that such a development is inevitable once a policy of credit expansion is embarked upon, but that it has to be carried to that point if a certain result—a constant rate of forced saving, or maintenance without the help of voluntary saving of capital accumulated by forced saving—is to be achieved.

Friedman’s (1968) argument why monetary expansion could not permanently reduce unemployment below its “natural rate” closely mirrors Hayek’s argument (which Friedman almost certainly never read) that monetary expansion could not permanently maintain a rate of investment spending above the rate of voluntary saving. Generalizing Friedman’s logic, Lucas (1976) transformed it into a critique of using econometric estimates of relationships like the Phillips Curve, the specific target of Friedman’s argument, as a basis for predicting the effects of policy changes, such estimates being conditional on implicit expectational assumptions that aren’t invariant to the policy changes derived from those estimates.

Stated differently, such econometric estimates are reduced forms that, without identifying restrictions, do not allow the estimated regression coefficients to be used to predict the effects of a policy change.

Only by specifying, and estimating, the deep structural relationships governing the response to a policy change could the effect of a potential policy change be predicted with some confidence that the prediction would not prove erroneous because of changes in the econometrically estimated relationships once agents altered their behavior in response to the policy change.
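The point can be illustrated with a toy simulation (my own sketch, with stylized numbers, not anything from Lucas’s paper). Suppose output responds only to unanticipated inflation. Within a stable policy regime, a regression of output on inflation finds what looks like an exploitable trade-off; but once policy shifts the average inflation rate and expectations shift with it, the apparent long-run relationship all but disappears:

```python
# Toy Lucas-critique illustration (stylized numbers, my own sketch).
# Output gap responds only to UNANTICIPATED inflation:
#     y_t = b * (pi_t - E[pi_t]) + noise
import random

random.seed(1)
b = 2.0

def simulate(mean_pi, n=20000):
    # Rational expectations: agents expect the regime's mean inflation rate.
    pis = [random.gauss(mean_pi, 1.0) for _ in range(n)]
    ys = [b * (pi - mean_pi) + random.gauss(0.0, 0.5) for pi in pis]
    return pis, ys

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / sum((xi - mx) ** 2 for xi in x)

pi1, y1 = simulate(mean_pi=2.0)          # old regime: 2% average inflation
pi2, y2 = simulate(mean_pi=8.0)          # new regime: 8% average inflation

within = ols_slope(pi1, y1)              # within-regime slope ~ b: looks exploitable
pooled = ols_slope(pi1 + pi2, y1 + y2)   # cross-regime "long-run" slope ~ 0.2
avg_y_new = sum(y2) / len(y2)            # higher inflation bought ~ no extra output
```

The within-regime coefficient is a reduced form: it reflects the stability of expectations under the old policy, not a structural parameter, and it cannot predict what raising average inflation will do.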

In his 1974 Nobel Lecture, Hayek offered a similar explanation of why an observed correlation between aggregate demand and employment provides no basis for predicting the effect of policies aimed at increasing aggregate demand and reducing unemployment if the likely changes in structural relationships caused by those policies are not taken into account.

[T]he very measures which the dominant “macro-economic” theory has recommended as a remedy for unemployment, namely the increase of aggregate demand, have become a cause of a very extensive misallocation of resources which is likely to make later large-scale unemployment inevitable. The continuous injection . . . money at points of the economic system where it creates a temporary demand which must cease when the increase of the quantity of money stops or slows down, together with the expectation of a continuing rise of prices, draws labour . . . into employments which can last only so long as the increase of the quantity of money continues at the same rate – or perhaps even only so long as it continues to accelerate at a given rate. What this policy has produced is not so much a level of employment that could not have been brought about in other ways, as a distribution of employment which cannot be indefinitely maintained . . . The fact is that by a mistaken theoretical view we have been led into a precarious position in which we cannot prevent substantial unemployment from re-appearing; not because . . . this unemployment is deliberately brought about as a means to combat inflation, but because it is now bound to occur as a deeply regrettable but inescapable consequence of the mistaken policies of the past as soon as inflation ceases to accelerate.

Hayek’s point that an observed correlation between the rate of inflation (a proxy for aggregate demand) and unemployment cannot be relied on in making economic policy was articulated succinctly and abstractly by Lucas as follows:

In short, one can imagine situations in which empirical Phillips curves exhibit long lags and situations in which there are no lagged effects. In either case, the “long-run” output inflation relationship as calculated or simulated in the conventional way has no bearing on the actual consequences of pursuing a policy of inflation.

[T]he ability . . . to forecast consequences of a change in policy rests crucially on the assumption that the parameters describing the new policy . . . are known by agents. Over periods for which this assumption is not approximately valid . . . empirical Phillips curves will appear subject to “parameter drift,” describable over the sample period, but unpredictable for all but the very near future.
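Lucas’s point about “parameter drift” can be illustrated numerically. The following sketch is my own (the AR(1) policy rule, the Lucas-type supply curve in which output responds only to inflation surprises, and the parameter values are all illustrative assumptions, not anything in Lucas’s paper): the reduced-form “Phillips curve” slope estimated from the data changes when the policy rule changes, even though the structural parameter theta is constant.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduced_form_slope(rho, theta=1.0, n=5000):
    """Estimate a reduced-form 'Phillips curve' slope when output responds
    only to inflation surprises (a Lucas-type supply curve) and inflation
    follows the policy rule pi_t = rho * pi_{t-1} + e_t."""
    e = rng.normal(size=n)
    pi = np.zeros(n)
    for t in range(1, n):
        pi[t] = rho * pi[t - 1] + e[t]
    # Under rational expectations, the forecast is rho * pi_{t-1},
    # so only the innovation e_t is a surprise:
    surprise = pi[1:] - rho * pi[:-1]
    y = theta * surprise + 0.1 * rng.normal(size=n - 1)
    # Regress output on inflation alone, as a conventional Phillips curve would:
    return np.polyfit(pi[1:], y, 1)[0]

print(reduced_form_slope(rho=0.2))   # close to theta: inflation is mostly surprise
print(reduced_form_slope(rho=0.95))  # near zero: persistent inflation is anticipated
```

Algebraically the estimated slope is theta*(1 - rho^2): the same structural economy yields a steep apparent tradeoff under a transitory policy rule and almost none under a persistent one, which is exactly why the estimated correlation cannot survive a change in regime.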

The lesson inferred by both Hayek and Lucas was that Keynesian macroeconomic models of aggregate demand, inflation and employment can’t reliably guide economic policy and should be discarded in favor of models more securely grounded in the microeconomic theories of supply and demand that emerged from the Marginal Revolution of the 1870s and eventually became the neoclassical economic theory that describes the characteristics of an efficient, decentralized and self-regulating economic system. It was on this microeconomic foundation, Hayek and Lucas believed, that macroeconomic theory ought to be built, in place of the Keynesian system they were criticizing. But that superficial similarity obscures the profound methodological and substantive differences between them.

Those differences will be considered in future posts.

References

Friedman, M. 1968. “The Role of Monetary Policy.” American Economic Review 58(1):1-17.

Glasner, D. 2021. “Hayek, Deflation, Gold and Nihilism.” Ch. 16 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. [1994] 2021. “Debt, Deflation, the Gold Standard and the Great Depression.” Ch. 13 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. 2021. “Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel?” Ch. 14 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Zimmerman, P. 2021.  “The Sraffa-Hayek Debate on the Natural Rate of Interest.” Ch. 15 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Hansen, A. and Tout, H. 1933. “Annual Survey of Business Cycle Theory: Investment and Saving in Business Cycle Theory,” Econometrica 1(2): 119-47.

Hayek, F. A. [1928] 1984. “Intertemporal Price Equilibrium and Movements in the Value of Money.” In R. McCloughry (Ed.), Money, Capital and Fluctuations: Early Essays (pp. 171–215). Routledge.

Hayek, F. A. 1931a. Prices and Production. London: Macmillan.

Hayek, F. A. 1931b. “Reflections on the Pure Theory of Money of Mr. Keynes.” Economica 33:270-95.

Hayek, F. A. 1934. “Capital and Industrial Fluctuations.” Econometrica 2(2): 152-67.

Keynes, J. M. 1930. A Treatise on Money. 2 vols. London: Macmillan.

Lindahl, E. [1929] 1939. “The Place of Capital in the Theory of Price.” In E. Lindahl, Studies in the Theory of Money and Capital. George, Allen & Unwin.

Lucas, R. E. [1976] 1985. “Econometric Policy Evaluation: A Critique.” In R. E. Lucas, Studies in Business-Cycle Theory. Cambridge: MIT Press.

Myrdal, G. 1927. Prisbildningsproblemet och Föränderligheten (Price Formation and the Change Factor). Almqvist & Wiksell.

Hayek Refutes Banana Republican Followers of Scalia Declaring War on Unenumerated Rights

Though overshadowed by the towering obnoxiousness of their questioning of Judge Ketanji Brown Jackson in her confirmation hearings last week, the Banana Republicans on the Senate Judiciary Committee signaled that their goals for remaking American Constitutional Jurisprudence extend far beyond overturning Roe v. Wade; they will be satisfied with nothing less than the evisceration of all unenumerated Constitutional rights that the Courts have found over the past two centuries. The idea that rights exist only insofar as they are explicitly recognized and granted by written legislative or Constitutional enactment, as understood at the moment of enactment, is the bedrock on which Justice Scalia founded his jurisprudential doctrine.

The idea was clearly rejected by the signatories of the Declaration of Independence, which in its second sentence declared:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty and the pursuit of happiness.

Clearly the signatories of the Declaration believed that individual rights exist independently of any legislative or Constitutional enactment. Moreover, the three rights listed by the Declaration (life, liberty and the pursuit of happiness) are not exhaustive; they are only among a longer list of unenumerated rights endowed to individuals by their Creator. Rejecting the idea of natural or moral rights to which individuals are entitled by virtue of their humanity, Scalia adopted the positivist position that all law is an expression of the will of the sovereign, which, in the United States, is in some abstract sense “the people” as expressed through the Constitution (including its Amendments) and through legislation by Congress and state legislatures.

Treating Scalia’s doctrine as controlling, the Banana Republicans regard all judicial decisions that invalidate legislative enactments based on the existence of individual rights not explicitly enumerated in the Constitution as fundamentally illegitimate and worthy of being overruled by suitably right-thinking judges.

Not only is Scalia’s doctrine fundamentally at odds with the Declaration of Independence, which has limited legal force, it is directly contradicted by the Ninth Amendment to the Constitution which states:

The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.

So, the Ninth Amendment explicitly negates the Scalian doctrine that the only rights to which individuals have a legal claim are those explicitly enumerated by the Constitution. Scalia’s jurisprudential predecessor, Robert Bork, whose originalist philosophy Scalia revised and restated in a more palatable form, dismissed the Ninth Amendment as unintelligible, and, therefore, essentially a nullity. Scalia, himself, was unwilling to call it unintelligible, but came up with the following, hardly less incoherent, rationale, reeking of bad faith, for relegating the Ninth Amendment to the ash heap of history:

He should apply the Ninth Amendment as it is written. And I apply it rigorously; I do not deny or disparage the existence of other rights in the sense of natural rights. That’s what the framers meant by that. Just because we’ve listed some rights of the people here doesn’t mean that we don’t believe that people have other rights. And if you try to take them away, we will revolt. And a revolt will be justified. It was the framers’ expression of their belief in natural law. But they did not put it in the charge of the courts to enforce.

https://lareviewofbooks.org/article/reading-the-text-an-interview-with-justice-antonin-scalia-of-the-u-s-supreme-court/

If Scalia had been honest, he would have said “He cannot apply the Ninth Amendment as it is written. And I rigorously do not apply it.” I mean what could Scalia, or any judge in thrall to Scalian jurisprudence, possibly do with the Ninth Amendment after saying: “But [the framers] did not put [the Ninth Amendment] in the charge of the courts to enforce”? After all, according to the estimable [sarcasm alert] Mr. Justice Scalia, the Ninth Amendment was added to the Constitution to grant the citizenry — presumably exercising their Second Amendment rights and implementing Second Amendment remedies — a right to overthrow the government that the framers were, at that very moment, ordaining and establishing.

In The Constitution of Liberty, F. A. Hayek provided an extended analysis of the U. S. Constitution and why a Bill of Rights was added as a condition of its ratification in 1788. His discussion of the Ninth Amendment demolishes Scalia’s nullification of the Ninth Amendment. Here is an extended quotation:

Hayek The Constitution of Liberty, pp. 185-86

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture “The Pretence of Knowledge”; its principal theme was an attack on the simple notion that the long-observed correlation between aggregate demand and employment is a reliable basis for conducting macroeconomic policy. Reiterating an argument that he had made over 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek was informally anticipating the argument that Robert Lucas repackaged two years later in his famous critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between “phenomena of disorganized complexity” and “phenomena of organized complexity.” Statistical relationships or correlations between phenomena of disorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as merely the pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on a methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie model of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained that macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two basic principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those choices. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than that individual rationality be explicitly assumed in every macroeconomic model and that every decision by every agent in a macroeconomic model be individually rational.

In practice, implementing Lucasian methodological principles required that in any macroeconomic model all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, all individual decisions being conditional on the decisions of other agents. Individual optimization can only be solved simultaneously for all agents, not individually in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations was assumed, based on the faulty reasoning of Walras, Fisher and Pareto, who simply counted equations and unknowns. A rigorous proof of existence was provided only by Abraham Wald in 1936, and subsequently in more general form by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.
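Why equation counting is not a proof of existence can be seen in miniature. The toy system below is purely illustrative (nothing like it appears in Walras or Wald): it has exactly as many equations as unknowns, yet no solution at all.

```python
import numpy as np

# Two equations in two unknowns -- the Walras-Fisher-Pareto count comes
# out even -- yet the system is inconsistent and has no solution:
#   x + y = 1
#   x + y = 2
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

print(np.linalg.matrix_rank(A))  # 1: the two equations are not independent
try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError:
    print("no solution exists")  # equal counts of equations and unknowns proved nothing
```

Wald’s contribution, and later Arrow, Debreu and McKenzie’s, was precisely to replace this kind of counting with conditions under which a solution provably exists.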

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but Marshallian partial-equilibrium analysis presumes that all prices, except the price in the single market under analysis, are already at their equilibrium values. So the Marshallian approach provides no more of an explanation than the Walrasian approach of the process by which a set of equilibrium prices for an entire economy is, or could be, reached.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model. It does so on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series derived from these models are generated by attributing all observed variations in national income to unexplained productivity shocks, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of them.

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1959 paper, “Toward a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility somehow guarantees that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the concept of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was methodologically necessary and ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the reduction ought to be pursued as a theoretical and empirical undertaking. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications for both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying the theories. But most important theories, even in pure mathematics, start out as informal theories that only gradually become axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives by which Lucas and his followers have justified their claims to scientific prestige and authority, and have compelled compliance from other researchers, only belie those pretensions.

The Explanatory Gap and Mengerian Subjectivism

My last several posts have focused on Marshall and Walras: the relationships and differences between Marshall’s partial-equilibrium approach and Walras’s general-equilibrium approach, and how current neoclassical economics is divided between the more practical, applied approach of Marshallian partial-equilibrium analysis and the more theoretical general-equilibrium approach of Walras. The divide is particularly important for the history of macroeconomics, because many of the macroeconomic controversies in the decades since Keynes have also involved differences between Marshallians and Walrasians. I’m not happy with either the Marshallian or the Walrasian approach, and I have been trying to articulate my unhappiness with both branches of current neoclassical thinking by going back to the work of the forgotten marginal revolutionary, Carl Menger. I’ve been writing a paper, drawing on some of my recent musings, for a conference later this month celebrating the 150th anniversary of Menger’s great work, because I think his work offers at least some hints at how to go about developing an improved neoclassical theory. Here’s a further sampling of my thinking, drawn from one of the sections of my work in progress.

Both the Marshallian and the Walrasian versions of equilibrium analysis have failed to bridge an explanatory gap between the equilibrium state, whose existence is crucial for such empirical content as can be claimed on behalf of those versions of neoclassical theory, and any account of how such an equilibrium state could ever be attained. The gap was identified by one of the chief architects of modern neoclassical theory, Kenneth Arrow, in his 1959 paper “Toward a Theory of Price Adjustment.”

The equilibrium is defined in terms of a set of prices. In the Marshallian version, the equilibrium prices are assumed to have already been determined in all but a single market (or perhaps a subset of closely related markets), so that the Marshallian equilibrium simply represents how, in a single small or isolated market, an equilibrium price in that market is determined under suitable ceteris-paribus conditions, thereby leaving the equilibrium prices determined in other markets unaffected.

In the Walrasian version, all prices in all markets are determined simultaneously, but the method for determining those prices simultaneously was not spelled out by Walras other than by reference to the admittedly fictitious and purely heuristic tâtonnement process.
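The tâtonnement story can be made concrete in a toy exchange economy. The sketch below is my own illustration, not anything in Walras: a fictitious auctioneer raises the price of a good in excess demand and lowers it otherwise, with no trade occurring out of equilibrium; the two Cobb-Douglas agents, their endowments, and the adjustment step are all assumed for the example.

```python
def excess_demand(p1, p2=1.0):
    """Aggregate excess demand for good 1 in a two-good, two-agent exchange
    economy with Cobb-Douglas preferences (illustrative numbers; good 2 is
    the numeraire, and by Walras' law its market clears when good 1's does)."""
    agents = [  # (budget share spent on good 1, endowment of good 1, of good 2)
        (0.3, 10.0, 2.0),
        (0.7, 2.0, 10.0),
    ]
    z = 0.0
    for share, w1, w2 in agents:
        wealth = p1 * w1 + p2 * w2
        z += share * wealth / p1 - w1   # Cobb-Douglas demand minus endowment
    return z

# Tatonnement: adjust the price in proportion to excess demand; no trade
# takes place until the auctioneer's groping reaches z = 0.
p1 = 2.0
for _ in range(1000):
    p1 += 0.05 * excess_demand(p1)
print(p1)                 # converges to the market-clearing price, here p1 = 1
print(excess_demand(p1))  # excess demand is (approximately) zero at the rest point
```

The point of the heuristic, and of Walras’s own admission, is that nothing in the theory corresponds to this auctioneer: real markets trade at disequilibrium prices, which is exactly the gap the tâtonnement fiction papers over.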

Both the Marshallian and Walrasian versions can show that equilibrium has optimal properties, but neither version can explain how the equilibrium is reached or how it can be discovered in practice. This is true even in the single-period context in which the Walrasian and Marshallian equilibrium analyses were originally carried out.

The single-period equilibrium has been extended, at least in a formal way, in the standard Arrow-Debreu-McKenzie (ADM) version of the Walrasian equilibrium, but this version is in important respects just an enhanced version of a single-period model inasmuch as all trades take place at time zero in a complete array of future state-contingent markets. So it is something of a stretch to consider the ADM model a truly intertemporal model in which the future can unfold in potentially surprising ways as opposed to just playing out a script already written in which agents go through the motions of executing a set of consistent plans to produce, purchase and sell in a sequence of predetermined actions.

Under less extreme assumptions than those of the ADM model, an intertemporal equilibrium involves both equilibrium current prices and equilibrium expected prices, and just as the equilibrium current prices are the same for all agents, equilibrium expected future prices must be equal for all agents. In his 1937 exposition of the concept of intertemporal equilibrium, Hayek explained the difference between what agents are assumed to know in a state of intertemporal equilibrium and what they are assumed to know in a single-period equilibrium.

If all agents share common knowledge, it may be plausible to assume that they will rationally arrive at similar expectations of the future prices. But if their stock of knowledge consists of both common knowledge and private knowledge, then it seems implausible to assume that the price expectations of different agents will always be in accord. Nevertheless, it is not necessarily inconceivable, though perhaps improbable, that agents will all arrive at the same expectations of future prices.

In the single-period equilibrium, all agents share common knowledge of equilibrium prices of all commodities. But in intertemporal equilibrium, agents lack knowledge of the future and can only form expectations of future prices derived from their own, more or less accurate, stocks of private knowledge. However, an equilibrium may still come about if, based on their private knowledge, they arrive at sufficiently similar expectations of future prices for their plans for their current and future purchases and sales to be mutually compatible.

Thus, some two decades after Arrow called attention to the explanatory gap in neoclassical theory by observing that there is no neoclassical theory of how competitive prices change, Milgrom and Stokey turned Arrow’s argument on its head by arguing that, under rational expectations, no trading would ever occur at prices other than equilibrium prices, so that it would be impossible for a trader with private information to profit from that information. This argument seems to rest on a widely shared misunderstanding of what rational expectations signify.

Thus, in the Mengerian view articulated by Hayek, intertemporal equilibrium, given the diversity of private knowledge and expectations, is an unlikely, but not inconceivable, state of affairs. That view stands in sharp contrast to the argument of Paul Milgrom and Nancy Stokey (1982) that, in a rational-expectations equilibrium, there is no private knowledge, only common knowledge, and that it would be impossible for any trader to trade on private knowledge, because no other trader with rational expectations would be willing to trade at any price other than the equilibrium price.

Rational expectations is not a property of individual agents making rational and efficient use of information, from whatever source it is acquired. As I have previously explained here (and in a revised version here), rational expectations is a property of intertemporal equilibrium; it is not an intrinsic property that agents have by virtue of being rational, just as the fact that the three angles of a triangle sum to 180 degrees is not a property of the angles qua angles, but a property of the triangle. When the expectations that agents hold about future prices are identical, their expectations are equilibrium expectations and they are rational. That agents hold rational expectations in equilibrium does not mean that they possess the power to calculate equilibrium prices or even to know whether their expectations of future prices are equilibrium expectations. Equilibrium is the cause of rational expectations; rational expectations do not exist if the conditions for equilibrium aren’t satisfied. See Blume, Curry and Easley (2006).

The assumption, now routinely regarded as axiomatic, that rational expectations is sufficient to ensure that equilibrium is automatically achieved, and that agents’ price expectations necessarily correspond to equilibrium price expectations, is a form of question-begging disguised as a methodological imperative requiring all macroeconomic models to be properly microfounded. The newly published volume edited by Arnon, Young and van der Beek, Expectations: Theory and Applications from Historical Perspectives, contains a wonderful essay by Duncan Foley that elucidates these issues.

In his centenary retrospective on Menger’s contribution, Hayek (1970), commenting on the inexactness of Menger’s account of economic theory, focused on Menger’s reluctance to embrace mathematics as an expository medium with which to articulate economic-theoretical concepts. While this may have been an aspect of Menger’s skepticism about mathematical reasoning, his recognition that expectations of the future are inherently inexact and conjectural and more akin to a range of potential outcomes of different probability may have been an even more significant factor in how Menger chose to articulate his theoretical vision.

But it is noteworthy that Hayek (1937) explicitly recognized that there is no theoretical explanation that accounts for any tendency toward intertemporal equilibrium, and instead merely relied (and in 1937!) on an empirical tendency of economies to move in the direction of equilibrium as a justification for attributing practical relevance to economic theory.

On the Price Specie Flow Mechanism

I have been working on a paper tentatively titled “The Smithian and Humean Traditions in Monetary Theory.” One section of the paper is on the price-specie-flow mechanism, about which I wrote last month in my previous post. This section develops the arguments of the previous post at greater length, draws on a number of earlier posts that I’ve written about PSFM (e.g., here and here), provides more detailed criticisms of both PSFM and sterilization, and offers some further historical evidence to support some of the theoretical arguments. I will be grateful for any comments and feedback.

The tortured intellectual history of the price-specie-flow mechanism (PSFM) received its still classic exposition in a Hume (1752) essay, which has remained a staple of the theory of international adjustment under the gold standard, or any international system of fixed exchange rates. Regrettably, the two-and-a-half-century life span of PSFM provides no ground for optimism about the prospects for progress in what some are pleased to call without irony economic science.

PSFM describes how, under a gold standard, national price levels tend to be equalized, with deviations between the national price levels in any two countries inducing gold to be shipped from the country with higher prices to the one with lower prices until prices are equalized. Premised on a version of the quantity theory of money in which (1) the price level in each country on the gold standard is determined by the quantity of money in that country, and (2) money consists entirely of gold coin or bullion, Hume elegantly articulated a model of disturbance and equilibration after an exogenous change in the gold stock in one country.
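Hume’s disturbance-and-equilibration story can be caricatured in a few lines of arithmetic. In the sketch below (my own illustration, not anything in Hume), each country’s price level is simply proportional to its gold stock, per the quantity-theory premise, with units chosen so that P = M, and the flow parameter k governing how fast gold leaves the dearer country is an assumption of the example.

```python
def psfm(m_a, m_b, k=0.2, steps=200):
    """Hume's price-specie-flow mechanism in caricature: price levels are
    proportional to national gold stocks, and gold flows from the
    higher-price country to the lower-price one until prices equalize."""
    for _ in range(steps):
        p_a, p_b = m_a, m_b          # quantity theory: price level = money stock
        flow = k * (p_a - p_b)       # gold leaves the country with dearer goods
        m_a -= flow
        m_b += flow
    return m_a, m_b

# An exogenous doubling of country A's gold stock is dissipated abroad
# until money stocks, and hence price levels, are equalized:
m_a, m_b = psfm(200.0, 100.0)
print(round(m_a, 6), round(m_b, 6))  # 150.0 150.0
```

The criticisms summarized later in this section are aimed precisely at the two premises hard-coded here: that local prices vary with the local money stock, and that gold flows are disequilibrium phenomena requiring correction.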

Viewing banks as inflationary engines of financial disorder, Hume disregarded banks and the convertible monetary liabilities of banks in his account of PSFM, leaving to others the task of describing the international adjustment process under a gold standard with fractional-reserve banking. The task of devising an institutional framework, within which PSFM could operate, for a system of fractional-reserve banking proved to be problematic and ultimately unsuccessful.

For three-quarters of a century, PSFM served a purely theoretical function. During the Bullionist debates of the first two decades of the nineteenth century, triggered by the suspension of the convertibility of the pound sterling into gold in 1797, PSFM served as a theoretical benchmark, not a guide for policy, it being generally assumed that, when convertibility was resumed, international monetary equilibrium would be restored automatically.

However, the 1821 resumption was followed by severe and recurring monetary disorders, leading some economists, who formed what became known as the Currency School, to view PSFM as a normative criterion for ensuring smooth adjustment to international gold flows. That criterion, the Currency Principle, stated that the total currency in circulation in Britain should increase or decrease by exactly as much as the amount of gold flowing into or out of Britain.[1]

The Currency Principle was codified by the Bank Charter Act of 1844. To mimic the Humean mechanism, the Act restricted, but did not suppress, the note issue of banks in England and Wales: they were allowed to continue issuing notes at current, but no higher, levels without holding equivalent gold reserves. Scottish and Irish note-issuing banks were allowed to continue issuing notes, but could increase their note issue only if matched by increased holdings of gold or government debt. In England and Wales, the note issue could increase only if gold was exchanged for Bank of England notes, so that a 100-percent marginal gold reserve requirement was imposed on additional banknotes.

Opposition to the Bank Charter Act was led by the Banking School, notably John Fullarton and Thomas Tooke. Rejecting the Humean quantity-theoretic underpinnings of the Currency School, the Banking School dismissed the quantitative limits of the Bank Charter Act as both unnecessary and counterproductive, because banks, obligated to redeem their liabilities directly or indirectly in gold, issue liabilities only insofar as they expect those liabilities to be willingly held by the public, or, if not, are capable of redeeming any liabilities no longer willingly held. Rather than the Humean view that banks issue banknotes or create deposits without constraint, the Banking School took Smith’s view that banks issue money in a form more convenient to hold and to transact with than metallic money, so that bank money allows an equivalent amount of gold to be shifted from monetary to real (non-monetary) uses, providing a net social saving. For a small open economy, the diversion (and likely export) of gold bullion from monetary to non-monetary uses has a negligible effect on prices (which are determined internationally, not locally).

The quarter century following enactment of the Bank Charter Act showed that the Act had not eliminated monetary disturbances, the government having been compelled to suspend the Act in 1847, 1857 and 1866 to prevent incipient crises from causing financial collapse. Indeed, it was precisely the fear that liquidity might not be forthcoming that precipitated increased demands for liquidity that the Act made it impossible to accommodate. Suspending the Act was sufficient to end the crises with limited intervention by the Bank. [check articles on the crises of 1847, 1857 and 1866.]

It may seem surprising, but the disappointing results of the Bank Charter Act provided little vindication to the Banking School. It led only to a partial, uneasy, and not entirely coherent, accommodation between PSFM doctrine and the reality of a monetary system in which the money stock consists mostly of banknotes and bank deposits issued by fractional-reserve banks. But despite the failure of the Bank Charter Act, PSFM achieved almost canonical status, continuing, albeit with some notable exceptions, to serve as the textbook model of the gold standard.

The requirement that gold flows induce equal changes in the quantity of money within the country into (or from) which gold is flowing was replaced by an admonition that gold flows lead to “appropriate” changes in the central-bank discount rate or an alternative monetary instrument, causing the quantity of money to change in the same direction as the gold flow. While such vague maxims, sometimes described as “the rules of the game,” gave only directional guidance about how to respond to changes in gold reserves, their hortatory character, and avoidance of quantitative guidance, allowed monetary authorities latitude to avoid the self-inflicted crises that had resulted from the quantitative limits of the Bank Charter Act.

Nevertheless, the myth of vague “rules” relating the quantity of money in a country to changes in gold reserves, whose observance ensured the smooth functioning of the international gold standard before its collapse at the start of World War I, enshrined PSFM as the theoretical paradigm for international monetary adjustment under the gold standard.

That paradigm was misconceived in four ways that can be briefly summarized.

  • Contrary to PSFM, changes in the quantity of money in a gold-standard country cannot change local prices proportionately, because prices of tradable goods in that country are constrained by arbitrage to equal the prices of those goods in other countries.
  • Contrary to PSFM, changes in local gold reserves are not necessarily caused either by non-monetary disturbances such as shifts in the terms of trade between countries or by local monetary disturbances (e.g. overissue by local banks) that must be reversed or counteracted by central-bank policy.
  • Contrary to PSFM, changes in the national price levels of gold-standard countries were uncorrelated with gold flows, and changes in national price levels across countries were positively, not negatively, correlated.
  • Contrary to PSFM, local banks and monetary authorities exhibit their own demands for gold reserves, whether by choice (i.e., independent of any legally required gold holdings) or by law (i.e., under a legal requirement to hold gold reserves equal to some fraction of the banknotes issued by banks or the monetary authority). Changes in gold reserves may therefore be caused by changes in the demands for gold by local banks and monetary authorities in one or more countries.

Many of the misconceptions underlying PSFM were identified by Fullarton’s refutation of the Currency School. In articulating the classical Law of Reflux, he established the logical independence of the quantity of convertible money in a country from the quantity of gold reserves held by the monetary authority. The gold reserves held by individual banks, or their deposits with the Bank of England, are not the raw material from which banks create money, either banknotes or deposits. Rather, it is their creation of banknotes or deposits when extending credit to customers that generates a derived demand to hold liquid assets (i.e., gold) to allow them to accommodate the demands of customers and other banks to redeem banknotes and deposits. Causality runs from creating banknotes and deposits to holding reserves, not vice versa.

The misconceptions inherent in PSFM and the resulting misunderstanding of gold flows under the gold standard led to a further misconception known as sterilization: the idea that central banks, violating the obligations imposed by “the rules of the game,” do not allow, or deliberately prevent, local money stocks from changing as their gold holdings change. The misconception lies in the presumption that gold inflows must necessarily cause increases in local money stocks. The mechanisms causing local money stocks to change are entirely different from those causing gold flows. And insofar as those mechanisms are related, causality flows from the local money stock to gold reserves, not vice versa.

Gold flows also result when monetary authorities transform their own asset holdings into gold. Notable examples of such transformations occurred in the 1870s, when a number of countries abandoned their de jure bimetallic (and de facto silver) standards for the gold standard. Monetary authorities in those countries transformed silver holdings into gold, driving the value of gold up and that of silver down. Similarly, but with more catastrophic consequences, the Bank of France, after France restored the gold standard in 1928, began converting its holdings of foreign-exchange reserves (financial claims on the United States or Britain, payable in gold) into gold. Following the French example, other countries rejoining the gold standard redeemed foreign exchange for gold, causing gold appreciation and a deflation that led to the Great Depression.

Rereading the memoirs of this splendid translation . . . has impressed me with important subtleties that I missed when I read the memoirs in a language not my own and in which I am far from completely fluent. Had I fully appreciated those subtleties when Anna Schwartz and I were writing our A Monetary History of the United States, we would likely have assessed responsibility for the international character of the Great Depression somewhat differently. We attributed responsibility for the initiation of a worldwide contraction to the United States and I would not alter that judgment now. However, we also remarked, “The international effects were severe and the transmission rapid, not only because the gold-exchange standard had rendered the international financial system more vulnerable to disturbances, but also because the United States did not follow gold-standard rules.” Were I writing that sentence today, I would say “because the United States and France did not follow gold-standard rules.”

I pause to note for the record Friedman’s assertion that the United States and France did not follow “gold-standard rules.” Warming up to the idea, he then accused them of sterilization.

Benjamin Strong and Emile Moreau were admirable characters of personal force and integrity. But . . . the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community. We stressed that the U.S. “did not permit the inflow of gold to expand the U.S. money stock. We not only sterilized it, we went much further. Our money stock moved perversely, going down as the gold stock went up” from 1929 to 1931.

Strong and Moreau tried to reconcile two ultimately incompatible objectives: fixed exchange rates and internal price stability. Thanks to the level at which Britain returned to gold in 1925, the U.S. dollar was undervalued, and thanks to the level at which France returned to gold at the end of 1926, so was the French franc. Both countries as a result experienced substantial gold inflows. Gold-standard rules called for letting the stock of money rise in response to the gold inflows and for price inflation in the U.S. and France, and deflation in Britain, to end the over-and under-valuations. But both Strong and Moreau were determined to prevent inflation and accordingly both sterilized the gold inflows, preventing them from providing the required increase in the quantity of money.

Friedman’s discussion of sterilization is at odds with basic theory. Working with a naïve version of PSFM, he imagines that gold flows passively respond to trade balances independent of monetary forces, and that the monetary authority under a gold standard is supposed to ensure that the domestic money stock varies roughly in proportion to its gold reserves. Ignoring the international deflationary dynamic, he asserts that the US money stock perversely declined from 1929 to 1931 while its gold stock increased. With a faltering banking system, the public shifted from holding demand deposits to currency. Gold reserves were legally required against currency, but not against demand deposits, so the shift from deposits to currency entailed an increase in required gold reserves. To be sure, the increased US demand for gold added to the upward pressure on the value of gold, and to worldwide deflationary pressure. But US gold holdings rose by only $150 million from December 1929 to December 1931, compared with an increase of $1.06 billion in French gold holdings over the same period. Gold accumulation by the US, and its direct contribution to world deflation during the first two years of the Depression, was small relative to that of France.
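To make the orders of magnitude explicit, the comparison can be checked with a few lines of arithmetic (a back-of-the-envelope sketch using only the two figures cited above, not a full accounting of world gold stocks):

```python
# Gold accumulation, December 1929 to December 1931 (figures from the text)
us_gold_increase_usd = 150_000_000        # US holdings rose ~$150 million
french_gold_increase_usd = 1_060_000_000  # French holdings rose ~$1.06 billion

ratio = french_gold_increase_usd / us_gold_increase_usd
french_share = french_gold_increase_usd / (french_gold_increase_usd + us_gold_increase_usd)

print(f"France accumulated {ratio:.1f}x as much gold as the US")
print(f"French share of the two countries' combined accumulation: {french_share:.0%}")
```

On these figures, French accumulation was roughly seven times the American, which is precisely the point of the comparison.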

Friedman also erred in stating “the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community.” The shock to the international community clearly originated not in the US but in France. The Fed could have absorbed and mitigated the shock by allowing a substantial outflow of its huge gold reserves, but instead amplified the shock by raising interest rates to nearly unprecedented levels, causing gold to flow into the US.

After correctly noting the incompatibility between fixed exchange rates and internal price stability, Friedman contradicts himself by asserting that, in seeking to stabilize their internal price levels, Strong and Moreau violated the gold-standard “rules,” as if it were rules, not arbitrage, that constrain national price levels to converge toward a common level under a gold standard.

Friedman’s assertion that, after 1925, the dollar was undervalued and sterling overvalued was not wrong. But he misunderstood the consequences of currency undervaluation and overvaluation under the gold standard, a confusion stemming from the underlying misconception, derived from PSFM, that foreign exchange rates adjust to balance trade flows, so that, in equilibrium, no country runs a trade deficit or trade surplus.

Thus, in Friedman’s view, dollar undervaluation and sterling overvaluation implied a US trade surplus and British trade deficit, causing gold to flow from Britain to the US. Under gold-standard “rules,” the US money stock and US prices were supposed to rise and the British money stock and British prices were supposed to fall until undervaluation and overvaluation were eliminated. Friedman therefore blamed sterilization of gold inflows by the Fed for preventing the necessary increase in the US money stock and price level to restore equilibrium. But, in fact, from 1925 through 1928, prices in the US were roughly stable and prices in Britain fell slightly. Violating gold-standard “rules” did not prevent the US and British price levels from converging, a convergence driven by market forces, not “rules.”

The stance of monetary policy in a gold-standard country had minimal effect on either the quantity of money or the price level in that country, which were mainly determined by the internationally determined value of gold. What the stance of national monetary policy determines under the gold standard is whether the quantity of money in the country adjusts to the quantity demanded by a process of domestic monetary creation or withdrawal or by the inflow or outflow of gold. Sufficiently tight domestic monetary policy restricting the quantity of domestic money causes a compensatory gold inflow increasing the domestic money stock, while sufficiently easy money causes a compensatory outflow of gold reducing the domestic money stock. Tightness or ease of domestic monetary policy under the gold standard mainly affected gold and foreign-exchange reserves, and, only minimally, the quantity of domestic money and the domestic price level.
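The adjustment just described can be put in the form of a toy accounting identity (entirely illustrative; the function and numbers are my own, not drawn from any source): with the price level, and hence the quantity of money demanded, pinned down internationally, whatever part of that quantity the domestic banking system does not create is supplied by a gold inflow, and any excess drains away as an outflow.

```python
def gold_flow(money_demand: float, domestic_credit: float) -> float:
    """Toy small-open-economy identity under a gold standard.

    The public ends up holding the quantity of money it demands (fixed by
    the internationally determined price level). Whatever part of that
    quantity the banking system does not create domestically is supplied
    by a gold inflow; excess domestic creation drains out as an outflow.
    """
    return money_demand - domestic_credit

# Tight domestic policy (credit < demand): compensatory gold inflow
print(gold_flow(money_demand=100.0, domestic_credit=80.0))
# Easy domestic policy (credit > demand): compensatory gold outflow
print(gold_flow(money_demand=100.0, domestic_credit=120.0))
```

Either way, the money stock ends up at the quantity demanded; only the split between domestically created money and gold reserves changes, which is the sense in which policy affected reserves rather than the money stock.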

However, the combined effect of many countries simultaneously tightening monetary policy in a deliberate, or even inadvertent, attempt to accumulate, or at least to avoid losing, gold reserves could indeed drive up the international value of gold through a deflationary process affecting prices in all gold-standard countries. Friedman, even while admitting that, in his Monetary History, he had understated the effect of the Bank of France on the Great Depression, referred only to the overvaluation of sterling and the undervaluation of the dollar and franc as causes of the Great Depression, remaining oblivious to the deflationary effects of gold accumulation and appreciation.

It was thus nonsensical for Friedman to argue that the mistake of the Bank of France during the Great Depression was not to increase the quantity of francs in proportion to the increase of its gold reserves. The problem was not that the quantity of francs was too low; it was that the Bank of France prevented the French public from collectively increasing the quantity of francs that they held except by importing gold.

Unlike Friedman, F. A. Hayek actually defended the policy of the Bank of France, and denied that the Bank of France had violated “the rules of the game” after nearly quadrupling its gold reserves between 1928 and 1932. Under his interpretation of those “rules,” because the Bank of France increased the quantity of banknotes after the 1928 restoration of convertibility by about as much as its gold reserves increased, it had fully complied with the “rules.” Hayek’s defense was incoherent; under its legal obligation to convert gold into francs at the official conversion rate, the Bank of France had no choice but to increase the quantity of francs by as much as its gold reserves increased.

That eminent economists like Hayek and Friedman could defend, or criticize, the conduct of the Bank of France during the Great Depression, because the Bank either did, or did not, follow “the rules of the game” under which the gold standard operated, shows the uselessness and irrelevance of the “rules of the game” as a guide to policy. For that reason alone, the failure of empirical studies to find evidence that “the rules of the game” were followed during the heyday of the gold standard is unsurprising. But the deeper reason for that lack of evidence is that PSFM, whose implementation “the rules of the game” were supposed to guarantee, was based on a misunderstanding of the international-adjustment mechanism under either the gold standard or any fixed-exchange-rates system.

Despite the grip of PSFM over most of the profession, a few economists did show a deeper understanding of the adjustment mechanism. That the price level in terms of gold directly constrained the movements of national price levels was recognized by writers as diverse as Keynes, Mises, and Hawtrey, who all pointed out that the prices of internationally traded commodities were constrained by arbitrage and that the free movement of capital would limit discrepancies in interest rates across countries attached to the gold standard, observations that had already been made by Smith, Thornton, Ricardo, Fullarton and Mill in the classical period. But, until the Monetary Approach to the Balance of Payments became popular in the 1970s, only Hawtrey consistently and systematically deduced the implications of those insights in analyzing both the Great Depression and the Bretton Woods system of fixed, but adjustable, exchange rates following World War II.

The inconsistencies and internal contradictions of PSFM were sometimes recognized, but usually overlooked, by business-cycle theorists who, in focusing on the disturbing influence of central banks, perpetuated the mistakes of the Humean Currency School doctrine, which attributed cyclical disturbances to the misbehavior of local banking systems inherently disposed to overissue their liabilities.

White and Hogan on Hayek and Cassel on the Causes of the Great Depression

Lawrence White and Thomas Hogan have just published a new paper in the Journal of Economic Behavior and Organization (“Hayek, Cassel, and the origins of the great depression”). Since White is a leading Hayek scholar, who has written extensively on Hayek’s economic writings (e.g., his important 2008 article “Did Hayek and Robbins Deepen the Great Depression?”) and edited the new edition of Hayek’s notoriously difficult volume, The Pure Theory of Capital, when it was published as volume 11 of the Collected Works of F. A. Hayek, the conclusion reached by the new paper that Hayek had a better understanding than Cassel of what caused the Great Depression is not, in and of itself, surprising.

However, I admit to being taken aback by the abstract of the paper:

We revisit the origins of the Great Depression by contrasting the accounts of two contemporary economists, Friedrich A. Hayek and Gustav Cassel. Their distinct theories highlight important, but often unacknowledged, differences between the international depression and the Great Depression in the United States. Hayek’s business cycle theory offered a monetary overexpansion account for the 1920s investment boom, the collapse of which initiated the Great Depression in the United States. Cassel’s warnings about a scarcity of gold reserves related to the international character of the downturn, but the mechanisms he emphasized contributed little to the deflation or depression in the United States.

I wouldn’t deny that there are differences between the way the Great Depression played out in the United States and in the rest of the world, e.g., Britain and France, which, to be sure, suffered less severely than did the US or, say, Germany. It is both possible, and important, to explore and understand the differential effects of the Great Depression in various countries. I am sorry to say that White and Hogan do neither. Instead, taking at face value the dubious authority of Friedman and Schwartz’s treatment of the Great Depression in the Monetary History of the United States, they assert that the cause of the Great Depression in the US was fundamentally different from the cause of the Great Depression in many or all other countries.

Taking that insupportable premise from Friedman and Schwartz, they simply invoke various numerical facts from the Monetary History as if those facts, in and of themselves, demonstrate what needs to be demonstrated: that the causes of the Great Depression in the US were different from those of the Great Depression in the rest of the world. That assumption vitiated the entire treatment of the Great Depression in the Monetary History, and it vitiates the results that White and Hogan reach about the merits of the conflicting explanations of the Great Depression offered by Cassel and Hayek.

I’ve discussed the failings of Friedman’s treatment of the Great Depression and of other episodes he analyzed in the Monetary History in previous posts (e.g., here, here, here, here, and here). The common failing of all the episodes treated by Friedman in the Monetary History and elsewhere is that he misunderstood how the gold standard operated, because his model of the gold standard was a primitive version of the price-specie-flow mechanism in which the monetary authority determines the quantity of money, which then determines the price level, which then determines the balance of payments, the balance of payments being a function of the relative price levels of the different countries on the gold standard. Countries with relatively high price levels experience trade deficits and outflows of gold, and countries with relatively low price levels experience trade surpluses and inflows of gold. Under the mythical “rules of the game” of the gold standard, countries with gold inflows were supposed to expand their money supplies, so that prices would rise, and countries with outflows were supposed to reduce their money supplies, so that prices would fall. If countries followed the rules, then an international monetary equilibrium would eventually be reached.
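The mechanism Friedman relied on can be caricatured in a few lines (purely illustrative; the parameter values and the linear adjustment rule are my own simplifications): each country’s price level moves with its money stock, the price differential drives a gold flow, and countries that obey the “rules” let the flow change their money stocks until prices converge.

```python
def psfm_adjustment(m_home, m_abroad, k=0.25, steps=60):
    """Naive price-specie-flow dynamics: the price level is proportional
    to the money stock, and gold (hence money) flows from the high-price
    to the low-price country at speed k until prices equalize."""
    for _ in range(steps):
        p_home, p_abroad = m_home, m_abroad  # P = M, velocity and output normalized to 1
        flow = k * (p_home - p_abroad)       # gold leaves the high-price country
        m_home -= flow
        m_abroad += flow
    return m_home, m_abroad

m_h, m_a = psfm_adjustment(120.0, 80.0)
print(round(m_h, 3), round(m_a, 3))  # both money stocks converge toward 100.0
```

The sketch shows why the model seems self-contained: money determines prices, prices determine gold flows, and the "rules" close the loop. The criticism developed above is that the first link is wrong under a gold standard, because arbitrage, not the local money stock, pins down the price level.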

That is the model of the gold standard that Friedman used throughout his career. He was not alone; Hayek and Mises and many others also used that model, following Hume’s treatment in his essay on the balance of trade. But it’s the wrong model. The correct model is the one originating with Adam Smith, based on the law of one price, which says that prices of all commodities in terms of gold are equalized by arbitrage in all countries on the gold standard.

As a first approximation, under the Smithean model, there is only one price level, adjusted for currency parities, for all countries on the gold standard. So if there is deflation in one country on the gold standard, there is deflation in all countries on the gold standard. If the rest of the world was suffering deflation under the gold standard, the US was also suffering a deflation of approximately the same magnitude.
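The first approximation can be made concrete (the dollar and sterling figures are the familiar mint prices per fine ounce; the franc parity is purely illustrative): each country’s price level in its own currency is the internationally determined price level in terms of gold multiplied by its fixed currency-per-gold parity, so a deflation in terms of gold is a deflation of the same proportion everywhere.

```python
# Currency units per fine ounce of gold; USD and GBP are the historical
# mint prices, FRF is an illustrative stand-in, not the actual parity.
parities = {"USD": 20.67, "GBP": 4.25, "FRF": 125.0}

def local_price_levels(gold_price_level: float) -> dict:
    """Law of one price: local price level = price level in gold x parity."""
    return {cur: gold_price_level * par for cur, par in parities.items()}

before = local_price_levels(1.00)
after = local_price_levels(0.80)   # a 20% deflation in terms of gold...
for cur in parities:
    print(cur, round(after[cur] / before[cur], 2))  # ...is 20% everywhere
```

With parities fixed, the ratio of local price levels before and after is the same in every country, which is why deflation under the gold standard was necessarily a common, international phenomenon.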

The entire premise of the Friedman account of the Great Depression, adopted unquestioningly by White and Hogan, is that there was a different causal mechanism for the Great Depression in the United States from the mechanism operating in the rest of the world. That premise is flatly wrong. The causation assumed by Friedman in the Monetary History was the exact opposite of the actual causation. It wasn’t, as Friedman assumed, that the decline in the quantity of money in the US was causing deflation; it was the common deflation in all gold-standard countries that was causing the quantity of money in the US to decline.

To be sure, there was a banking collapse in the US that exacerbated the catastrophe, but the collapse was an effect of the underlying cause, deflation, not an independent cause. Absent the deflationary collapse, there is no reason to assume that the investment boom in the most advanced and most productive economy in the world after World War I was unsustainable, as the Hayekian overinvestment/malinvestment hypothesis posits, with no evidence of unsustainability other than the subsequent economic collapse.

So what did cause deflation under the gold standard? It was the rapid increase in the monetary demand for gold resulting from the insane policy of the Bank of France (disgracefully endorsed by Hayek as late as 1932) which Cassel, along with Ralph Hawtrey (whose writings, closely parallel to Cassel’s on the danger of postwar deflation, avoid all of the ancillary mistakes White and Hogan attribute to Cassel), was warning would lead to catastrophe.

It is true that Cassel also believed that over the long run not enough gold was being produced to avoid deflation. White and Hogan spend inordinate space and attention on that issue, because that secular tendency toward deflation is entirely different from the catastrophic effects of the increase in gold demand in the late 1920s triggered by the insane policy of the Bank of France.

The US could have mitigated the effects had it been willing to accommodate the Bank of France’s demand to increase its gold holdings. Accommodation would, of course, have rewarded the French for their catastrophic policy, but, under the circumstances, addressing French misconduct by some other means would have spared the world incalculable suffering. Instead, misled by an inordinate fear of stock-market speculation, the Fed tightened policy in 1928-29 and began accumulating gold rather than accommodate the French demand.

And the Depression came.

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist – famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: his two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, were brothers-in-law. [Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple, von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, published in 1927 and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were exact contemporaries, from Hayek (born in 1899), who in turn was a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up in either the US or Britain after World War II. Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts in the NYU economics department, where he succeeded in training two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, but Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. However, Hayek’s interests shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without wealthy benefactors who anticipated that the Austrians would advance their political and economic interests neither discredits nor invalidates the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because the Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions most uncongenial to the Austrians (Walrasian general-equilibrium theory and Keynesian macroeconomics), the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to become the leader of a fantastical anarchistic revolutionary movement to replace all established governments with a regime of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his own death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted by Cato and proceeded to set up a rival think tank at Auburn University, the Ludwig von Mises Institute, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies to do one of three things: degenerate toward extinction; blend in with the members of the larger species, thereby losing its distinctive characteristics; or accentuate its unique traits, enabling it to find some niche within which to survive as a distinct subspecies. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They attribute the mainstream’s misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, assumptions that effectively make equilibrium the inevitable, and trivial, conclusion entailed by those extreme premises. In their view, the focus of mainstream models on equilibrium states under unrealistic assumptions reflects a preoccupation with mathematical formalism in which tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is common knowledge among the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism — i.e., that agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate – mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time – much less communicate to a central planner.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to keep the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and by his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of that program, which Hayek himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32; it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not was unfortunate, and not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It’s precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants — that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for all those plans to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even over that of Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s more granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate asserting that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustment by market participants provides a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, the Austrian belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment, while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory rationalization (at least in terms of a preferred Walrasian general-equilibrium model) for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, demonstrated the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who shared his interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent — in equilibrium. Under Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
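The mixed-strategy resolution of the Holmes-Moriarty game can be made concrete with a toy calculation. The payoff numbers below are illustrative assumptions in the spirit of the Theory of Games discussion, not figures taken from the book; given any such 2×2 zero-sum payoff matrix without a saddle point, the indifference conditions pin down each player’s probabilistic strategy:

```python
from fractions import Fraction

# Illustrative payoffs to Moriarty (rows: Moriarty stops at Dover/Canterbury;
# columns: Holmes stops at Dover/Canterbury). The numbers are stand-ins:
# 100 if Moriarty catches Holmes, -50 if Holmes escapes via Dover,
# 0 for an inconclusive miss at Canterbury.
A = [[Fraction(100), Fraction(0)],
     [Fraction(-50), Fraction(100)]]

def solve_2x2_zero_sum(payoffs):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with no saddle
    point: each player mixes so that the opponent is indifferent between
    his two pure choices."""
    (a, b), (c, d) = payoffs
    denom = (a - b) + (d - c)
    p = (d - c) / denom          # row player's weight on row 0 (Dover)
    q = (d - b) / denom          # column player's weight on column 0 (Dover)
    value = a * q + b * (1 - q)  # row player's expected payoff in equilibrium
    return p, q, value

p, q, v = solve_2x2_zero_sum(A)
print(p, q, v)  # Moriarty goes to Dover with probability 3/5, Holmes with 2/5
```

At these probabilities neither player gains by deviating, which is precisely the mutual-consistency requirement that a representative-agent construction simply assumes away.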

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, modeling a static one-period economy not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapunov differential equations, describing market price adjustments as functions of market excess demands, would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tâtonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
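Samuelson’s stability exercise can be sketched in a few lines. The two-good exchange economy below is an invented example (the Cobb-Douglas utility weights and endowments are my assumptions, chosen only for illustration); because Cobb-Douglas demands exhibit gross substitutability, the fictional price-adjustment process converges to the market-clearing price:

```python
# A minimal sketch of Samuelson's tatonnement: the auctioneer adjusts the
# price in proportion to excess demand, and no trade occurs until the
# process has converged. Endowments and utility weights are invented.

def excess_demand_good1(p1, p2=1.0):
    """Excess demand for good 1 in a two-good, two-consumer Cobb-Douglas
    exchange economy, with good 2 serving as numeraire."""
    # Consumer A: endowment (1, 0), Cobb-Douglas weights (0.3, 0.7)
    # Consumer B: endowment (0, 1), Cobb-Douglas weights (0.6, 0.4)
    wealth_a = 1.0 * p1   # value of A's endowment
    wealth_b = 1.0 * p2   # value of B's endowment
    demand = 0.3 * wealth_a / p1 + 0.6 * wealth_b / p1
    return demand - 1.0   # total endowment of good 1 is 1

def tatonnement(p1=2.0, step=0.5, iters=500):
    """Discrete-time price adjustment: dp/dt proportional to excess demand."""
    for _ in range(iters):
        p1 += step * excess_demand_good1(p1)
    return p1

# Analytic equilibrium: 0.6/p1 = 0.7, i.e. p1 = 6/7
print(tatonnement())  # ~0.8571
```

The point, of course, is that this convergence occurs in the auctioneer’s notional time, with no trading out of equilibrium, which is why its bearing on an actual economy adjusting in real time is so doubtful.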

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be, and are, taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had raised in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic if not overtly hostile to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.
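The distributed-lag treatment of merely adaptive expectations mentioned above can be made concrete in a short sketch. The series and the adjustment parameter are invented for illustration; the point is that the forecast is revised each period by a fraction of the last forecast error, which is equivalent to a geometrically declining weighted average of past observations.

```python
def adaptive_expectation(history, lam=0.5, e0=0.0):
    """Adaptive expectations: revise the forecast by a fraction lam of
    the most recent forecast error,
        E_t = E_{t-1} + lam * (x_{t-1} - E_{t-1}),
    i.e., a geometric distributed lag on past observations."""
    e = e0
    for x in history:
        e = e + lam * (x - e)
    return e

# A one-time jump in the variable: the adaptive forecast catches up only
# gradually, and is systematically wrong throughout the transition.
data = [0.0] * 5 + [10.0] * 5
print(adaptive_expectation(data, lam=0.5))  # prints 9.6875
```

The systematic forecast errors produced during the transition illustrate why adaptive expectations could not plausibly serve the equilibrating role discussed in the text: agents holding them make the same mistake period after period.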

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable but only a possibility. In the event, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in doing so, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium, without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous, microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding it conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.
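The Holmes-Moriarty game can be sketched directly. In the version below (the payoff numbers are invented for illustration) Moriarty wins if he alights at the same station as Holmes, and Holmes wins if he evades him; structurally it is the game usually called matching pennies. Checking every pure strategy profile shows that none is a Nash equilibrium: whatever the pair of choices, one player always gains by switching, so interdependent optimization does not, by itself, produce a mutually consistent outcome.

```python
import itertools

# Payoffs are (Holmes, Moriarty): Moriarty wins if the stations match,
# Holmes wins if they differ.  A zero-sum illustration of the
# Holmes-Moriarty pursuit game.
STATIONS = ("Dover", "Canterbury")
PAYOFFS = {
    (h, m): (-1, 1) if h == m else (1, -1)
    for h, m in itertools.product(STATIONS, STATIONS)
}

def is_pure_nash(h, m):
    """A profile is a pure-strategy Nash equilibrium iff neither player
    can gain by unilaterally switching stations."""
    uh, um = PAYOFFS[(h, m)]
    best_holmes = max(PAYOFFS[(h2, m)][0] for h2 in STATIONS)
    best_moriarty = max(PAYOFFS[(h, m2)][1] for m2 in STATIONS)
    return uh >= best_holmes and um >= best_moriarty

pure_equilibria = [profile for profile in PAYOFFS if is_pure_nash(*profile)]
print(pure_equilibria)  # prints [] -- no pure-strategy equilibrium
```

Only the mixed strategy in which each player randomizes fifty-fifty is a Nash equilibrium of this game, and even that existence result says nothing about how the players would ever arrive at it, which is precisely the gap in the theory noted above.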

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox, his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.


My Paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” Is Now Available Online.

The paper, forthcoming in The Review of Austrian Economics, can be read online.

Here is the abstract:

Hayek was among the first to realize that for intertemporal equilibrium to obtain all agents must have correct expectations of future prices. Before comparing four categories of intertemporal equilibrium, the paper explains Hayek’s distinction between correct expectations and perfect foresight. The four equilibrium concepts considered are: (1) perfect-foresight equilibrium, of which the Arrow-Debreu-McKenzie (ADM) model of equilibrium with complete markets is an alternative version; (2) Radner’s sequential equilibrium with incomplete markets; (3) Hicks’s temporary equilibrium, as extended by Bliss; and (4) the Muth rational-expectations equilibrium, as extended by Lucas into macroeconomics. While Hayek’s understanding closely resembles Radner’s sequential equilibrium, described by Radner as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium seems to have been the natural extension of Hayek’s approach. The now dominant Lucas rational-expectations equilibrium misconceives intertemporal equilibrium, suppressing Hayek’s insights and thereby retreating to a sterile perfect-foresight equilibrium.

And here is my concluding paragraph:

Four score and three years after Hayek explained how challenging were the subtleties of the notion of intertemporal equilibrium and how elusive any theoretical account of an empirical tendency toward intertemporal equilibrium, modern macroeconomics has built a formidable theoretical apparatus founded on a methodological principle that rejects all the concerns Hayek found so vexing and denies that those difficulties even exist. Many macroeconomists feel proud of what modern macroeconomics has achieved, but there is reason to think that the path trod by Hayek, Hicks and Radner could have led macroeconomics in a more fruitful direction than the one on which it has been led by Lucas and his associates.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.
