Archive for the 'Earl Thompson' Category

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence of micro (utility and cost theory, econ 101a), (general equilibrium theory, 101b), and (macro theory, 102) as a sophomore. It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, but his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement, highlighting its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that Axel would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later, when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that’s a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful. That was how macroeconomics was often framed: as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
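The IS-LM mechanics described above can be sketched numerically. In the minimal illustration below, the linear functional forms and every parameter value are invented for exposition; they come from no particular textbook or course:

```python
import numpy as np

# Hypothetical linear IS-LM system (all parameter values invented):
#   IS:  Y = a + b*Y + c - d*r + G   (consumption a + b*Y, investment c - d*r)
#   LM:  M = k*Y - h*r               (money demand k*Y - h*r equals fixed stock M)
a, b, c, d, G = 50.0, 0.8, 100.0, 25.0, 60.0
k, h, M = 0.5, 50.0, 150.0

# Rearranged as a linear system A @ [Y, r] = rhs:
#   (1 - b)*Y + d*r = a + c + G
#        k*Y - h*r = M
A = np.array([[1 - b, d],
              [k, -h]])
rhs = np.array([a + c + G, M])
Y, r = np.linalg.solve(A, rhs)

# Comparative statics: raise G by 10. The fiscal multiplier dY/dG equals
# h / ((1 - b)*h + d*k) -- large when money demand is highly interest-elastic
# (large h, the "Keynesian" end of the spectrum), and shrinking toward zero
# as h -> 0 (the "classical" end, where fiscal expansion is crowded out).
rhs_fiscal = np.array([a + c + G + 10.0, M])
Y_fiscal, _ = np.linalg.solve(A, rhs_fiscal)
print(Y, r, Y_fiscal - Y)
```

The elasticity debate mentioned above is visible in the last comment: the whole fiscal-vs-monetary argument turns on the size of the parameter h.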

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics, like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought, like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own preview of his forthcoming book, “Keynes and the Keynesians: A Suggested Interpretation,” and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in fall 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servomechanisms, and stable and unstable dynamic systems, and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war, and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index of Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different direction, following the path blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing wage, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing market wage by raising the price level to reduce the real wage down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment would occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Attention to the implications of the lack of an actual coordinating mechanism simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption) by neoclassical economic theory was, in Axel’s view, the great contribution of Keynes. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers, b) assuming that those agents therefore form identical rational expectations of all future prices using the same common knowledge, so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers, in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real-productivity shocks that dispense with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis but with little claim to any actual analytical insight.

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. As Earl explained it, the method rests on the assumption that markets for current delivery clear, but that those market-clearing prices differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, active and making important contributions to our understanding. And we should be grateful.

Krugman and Sumner on the Zero-Interest Lower Bound: Some History of Thought

UPDATE: Re-upping my post from July 8, 2011

I indicated in my first posting on Tuesday that I was going to comment on some recent comparisons between the current anemic recovery and earlier more robust recoveries since World War II. The comparison that I want to perform involves some simple econometrics, and it is taking longer than anticipated to iron out the little kinks that I keep finding. So I will have to put off that discussion a while longer. As a diversion, I will follow up on a point that Scott Sumner made in discussing Paul Krugman’s reasoning for having favored fiscal policy over monetary policy to lead us out of the recession.

Scott’s focus is on the factual question whether it is really true, as Krugman and Michael Woodford have claimed, that a monetary authority, like, say, the Bank of Japan, may simply be unable to create the inflation expectations necessary to achieve equilibrium, given the zero-interest-rate lower bound, when the equilibrium real interest rate is less than zero. Scott counters that a more plausible explanation for the inability of the Bank of Japan to escape from a liquidity trap is that its aversion to inflation is so well-known that it becomes rational for the public to expect that the Bank of Japan would not permit the inflation necessary for equilibrium.

It seems that a lot of people have trouble understanding the idea that there can be conditions in which inflation — or, to be more precise, expected inflation — is necessary for a recovery from a depression. We have become so used to thinking of inflation as a costly and disruptive aspect of economic life, that the notion that inflation may be an integral element of an economic equilibrium goes very deeply against the grain of our intuition.

The theoretical background of this point actually goes back to A. C. Pigou (another famous Cambridge economist, Alfred Marshall’s successor) who, in his 1936 review of Keynes’s General Theory, referred to what he called Mr. Keynes’s vision of the day of judgment, namely, a situation in which, because of depressed entrepreneurial profit expectations or a high propensity to save, macro-equilibrium (the equality of savings and investment) would correspond to a level of income and output below the level consistent with full employment.

The “classical” or “orthodox” remedy to such a situation was to reduce the rate of interest, or, as the British say, “Bank Rate” (as in “Magna Carta,” with no definite article), at which the Bank of England lends to its customers (mainly banks). But if entrepreneurs are sufficiently pessimistic, or households sufficiently determined to save rather than consume, an equilibrium corresponding to a level of income and output consistent with full employment could, in Keynes’s ghastly vision, come about only with a negative interest rate. Now a zero interest rate in economics is a little bit like the speed of light in physics; all kinds of crazy things start to happen if you posit a negative interest rate, and it seems inconsistent with the assumptions of rational behavior to suppose that people would lend at a negative interest rate when they could simply hold the money already in their pockets. That’s why Pigou’s metaphor was so powerful. There are layers upon layers of interesting personal and historical dynamics lying beneath the surface of Pigou’s review of Keynes, but I won’t pursue that tangent here, tempting though it would be to go in that direction.

The conclusion that Keynes drew from his model is the one that we all were taught in our first course in macro and that Paul Krugman holds close to his heart: the government can come to the rescue by increasing its spending on whatever, thereby increasing aggregate demand, raising income and output up to the level consistent with full employment. But Pigou, whose own policy recommendations were not much different from those of Keynes, felt that Keynes had left out an important element of the model in his discussion. As a matter of logic, which to Pigou was as important as, or more important than, policy, an economy confronting Keynes’s day of judgment would not forever be stuck in “underemployment equilibrium” just because the rate of interest could not fall to the (negative) level required for full employment.

Rather, Pigou insisted, at least in theory, though not necessarily in practice, deflation, resulting from unemployed workers bidding down wages to gain employment, would raise the real value of the money supply (fixed in nominal terms in Keynes’s model) thereby generating a windfall to holders of money, inducing them to increase consumption, raising aggregate demand and eventually restoring full employment.  Discussion of the theoretical validity and policy relevance of what came to be known as the Pigou effect (or, occasionally, as the Pigou-Haberler Effect, or even the Pigou-Haberler-Scitovsky effect) became a really big deal in macroeconomics in the 1940s and 1950s and was still being taught in the 1960s and 1970s.

What seems remarkable to me now about that whole episode is that the analysis simply left out the possibility that the zero-interest-rate lower bound becomes irrelevant if the expected rate of inflation exceeds the putative negative equilibrium real interest rate that would hypothetically generate a macro-equilibrium at a level of income and output consistent with full employment.
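The logic can be written out with the Fisher equation (the notation here is mine, not Pigou’s or Keynes’s). With nominal rate $i$, real rate $r$, and expected inflation $\pi^e$:

```latex
i = r + \pi^e, \qquad i \ge 0 \;\Longrightarrow\; r \ge -\pi^e .
```

So if the full-employment equilibrium real rate is, say, $r^* = -2\%$, the zero lower bound on the nominal rate blocks equilibrium only so long as expected inflation stays below 2 percent; any credibly expected inflation above 2 percent makes the required negative real rate attainable at a non-negative nominal rate.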

If only Pigou had corrected the logic of Keynes’s model by positing an expected rate of inflation greater than the negative real interest rate rather than positing a process of deflation to increase the real value of the money stock, how different would the course of history and the development of macroeconomics and monetary theory have been.

One economist who did think about the expected rate of inflation as an equilibrating variable in a macroeconomic model was one of my teachers, the late, great Earl Thompson, who introduced the idea of an equilibrium rate of inflation in his remarkable unpublished paper, “A Reformulation of Macroeconomic Theory.” If inflation is an equilibrating variable, then it cannot make sense for monetary authorities to commit themselves to a single unvarying target for the rate of inflation. Under certain circumstances, macroeconomic equilibrium may be incompatible with a rate of inflation below some minimum level. Has it occurred to the inflation hawks on the FOMC and their supporters that the minimum rate of inflation consistent with equilibrium is above the 2 percent rate that the Fed has now set as its policy goal?

One final point, which I am still trying to work out more coherently, is that it really may not be appropriate to think of the real rate of interest and the expected rate of inflation as being determined independently of each other. They clearly interact. As I point out in my paper “The Fisher Effect Under Deflationary Expectations,” increasing the expected rate of inflation when the real rate of interest is very low or negative tends to increase not just the nominal rate, but the real rate as well, by generating the positive feedback effects on income and employment that result when a depressed economy starts to expand.

A Primer on Say’s Law and Walras’s Law

Say’s Law, often paraphrased as “supply creates its own demand,” is one of the oldest “laws” in economics. It is also one of the least understood and most contentious propositions in economics. I am now in the process of revising the current draft of my paper “Say’s Law and the Classical Theory of Depressions,” which surveys and clarifies various interpretations, disputes and misunderstandings about Say’s Law. I thought that a brief update of my section discussing the relationship between Say’s Law and Walras’s Law might make for a useful blogpost. Not only does it discuss the meaning of Say’s Law and its relationship to Walras’s Law, it expands the narrow understanding of Say’s Law and corrects the mistaken view that Say’s Law does not hold in a monetary economy, because, given a demand to hold a pure medium of exchange, real goods may be supplied only to accumulate cash, not to obtain real goods and services. IOW, supply may be a demand for cash, not for goods. Under this interpretation, Say’s Law is valid only when the economy is in a macro or monetary equilibrium with no excess demand for money.

Here’s my discussion of that logically incorrect belief. (Let me add as a qualification that not only Say’s Law, but Walras’s Law, as I explained elsewhere in my paper, is not valid when there is not a complete set of forward and contingent markets. That’s because to prove Walras’s Law all agents must be optimizing on the same set of prices, whether actual observed prices or expected, but currently unobserved, prices. See also an earlier post about this paper in which I included the relevant excerpt from the paper.)

The argument that a demand to hold cash invalidates Say’s Law, because output may be produced for the purpose of accumulating cash rather than to buy other goods and services, is an argument that had been made by nineteenth-century critics of Say’s Law. The argument did not go without response, but the nature and import of the response was not well, or widely, understood, and the criticism was widely credited. Thus, in his early writings on business-cycle theory, F. A. Hayek, making no claim to originality, maintained matter-of-factly that money involves a disconnect between aggregate supply and aggregate demand, describing money as a “loose joint” in the theory of general equilibrium, creating the central theoretical problem to be addressed by business-cycle theory. So even Hayek in 1927 did not accept the validity of Say’s Law.

Oskar Lange (“Say’s Law: A Restatement and Criticism”) subsequently formalized the problem, introducing his distinction between Say’s Law and Walras’s Law. Lange defined Walras’s Law as the proposition that the sum of excess demands, corresponding to any price vector announced by a Walrasian auctioneer, must identically equal zero.[1] In a barter model, individual optimization, subject to the budget constraint corresponding to a given price vector, implies that the value of the planned purchases and planned sales of each agent must be exactly equal; if the value of the excess demands of each individual agent is zero, the sum of the values of the excess demands of all individuals must also be zero. In a barter model, Walras’s Law and Say’s Law are equivalent: demand is always sufficient to absorb supply.
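Lange’s barter-economy argument can be put compactly. In standard excess-demand notation (the symbols are mine, not Lange’s), with z_{ij} denoting agent i’s excess demand for good j at the announced prices:

```latex
% Each optimizing agent i's budget constraint forces the value of his
% excess demands to zero:
\sum_{j=1}^{n} p_j z_{ij} = 0
% Summing over all m agents yields Walras's Law:
\sum_{i=1}^{m} \sum_{j=1}^{n} p_j z_{ij}
  = \sum_{j=1}^{n} p_j Z_j = 0,
\qquad \text{where } Z_j = \sum_{i=1}^{m} z_{ij}.
```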

But in a model in which agents hold cash, which they use when transacting, they may supply real goods in order to add to their cash holdings. Because individual agents may seek to change their cash holdings, Lange argued that the equivalence between Walras’s Law and Say’s Law in a barter model does not carry over to a model in which agents hold money. Say’s Law cannot hold in such an economy unless excess demands in the markets for real goods sum to zero. But if agents all wish to add to their holdings of cash, their excess demand for cash will be offset by an excess supply of goods, which is precisely what Say’s Law denies.
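Lange’s point can be stated in standard excess-demand notation (the symbols are mine, not Lange’s), with Z_j the aggregate excess demand for real good j and Z_m the aggregate excess demand for money:

```latex
% Walras's Law with money as the (n+1)th good:
\sum_{j=1}^{n} p_j Z_j + Z_m = 0
% An excess demand for money (Z_m > 0) therefore implies an aggregate
% excess supply of real goods:
Z_m > 0 \;\Longrightarrow\; \sum_{j=1}^{n} p_j Z_j = -Z_m < 0,
% whereas Say's Law asserts that the goods markets clear identically:
\sum_{j=1}^{n} p_j Z_j \equiv 0.
```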

It is only when an equilibrium price vector is found at which the excess demand in each market is zero that Say’s Law is satisfied. Say’s Law, according to Lange, is a property of a general equilibrium, not a necessary property of rational economic conduct, as Say and his contemporaries and followers had argued. When our model is extended from a barter to a monetary setting, Say’s Law must be restated in the generalized form of Walras’s Law. But, unlike Say’s Law, Walras’s Law does not exclude the possibility of an aggregate excess supply of all goods. Aggregate demand can be deficient, and it can result in involuntary unemployment.

At bottom, this critique of Say’s Law depends on the assumption that the quantity of money is exogenously fixed, so that individuals can increase or decrease their holdings of money only by spending either less or more than their incomes. However, as noted above, if there is a market mechanism that allows an increased demand for cash balances to elicit an increased quantity of cash balances, so that the public need not reduce expenditures to finance additions to their holdings of cash, Lange’s critique may not invalidate Say’s Law.

A competitive monetary system based on convertibility into gold or some other asset[2] has precisely this property. In particular, with money privately supplied by a set of traders (let’s call them banks), money is created when a bank accepts a money-backing asset (IOU) supplied by a customer in exchange for issuing its liability (a banknote or a deposit), which is widely acceptable as a medium of exchange. As first pointed out by Thompson (1974), Lange’s analytical oversight was to assume that in a Walrasian model with n real goods and money, there are only (n+1) goods or assets. In fact, there are really (n+2) goods or assets; there are n real goods and two monetary assets (i.e., the money issued by the bank and the money-backing asset accepted by the bank in exchange for the money that it issues). Thus, an excess demand for money need not, as Lange assumed, be associated with, or offset by, an excess supply of real commodities; it may be offset by a supply of money-backing assets supplied by those seeking to increase their cash holdings.
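Thompson’s correction can be put in the same standard excess-demand notation (the symbols are mine, not Thompson’s): with n real goods, money (m), and the money-backing asset (b), the adding-up constraint has (n+2) terms, so an excess demand for money need not be offset by an excess supply of real goods:

```latex
% Walras's Law with (n+2) goods:
\sum_{j=1}^{n} p_j Z_j + Z_m + Z_b = 0
% An excess demand for money can be satisfied by supplying IOUs to the
% banks (Z_b = -Z_m < 0), leaving the markets for real goods in balance:
Z_m > 0,\quad Z_b = -Z_m
  \;\Longrightarrow\; \sum_{j=1}^{n} p_j Z_j = 0.
```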

Properly specifying the monetary model relevant to macroeconomic analysis eliminates a misconception that afflicted monetary and macroeconomic theory for a very long time, and provides a limited rehabilitation of Say’s Law. But that rehabilitation doesn’t mean that all would be well if we got rid of central banks, abandoned all countercyclical policies and let private banks operate without restrictions. None of those difficult and complicated questions can be answered by invoking or rejecting Say’s Law.

[1] Excess supplies are recorded as negative excess demands.

[2] The classical economists generally regarded gold or silver as the appropriate underlying asset into which privately issued monies would be convertible, but the possibility of a fiat standard was not rejected on analytical principle.

Milton Friedman’s Rabble-Rousing Case for Abolishing the Fed

I recently came across this excerpt from a longer interview of Milton Friedman conducted by Brian Lamb on C-SPAN in 1994. In this excerpt Lamb asks Friedman what he thinks of the Fed, and Friedman, barely able to contain his ideological fervor, quickly rattles off his version of the history of the Fed, blaming the Fed, at least by implication, for all the bad monetary and macroeconomic events that happened between 1914, when the Fed came into existence, and the 1970s.

Here’s a rough summary of Friedman’s tirade:

I have long been in favor of abolishing [the Fed]. There is no institution in the United States that has such a high public standing and such a poor record of performance. . . . The Federal Reserve began operations in 1914 and presided over a doubling of prices during World War I. It produced a major collapse in 1921. It had a good period from about 1922 to 1928. It took actions in 1928 and 1929 that led to a major recession in 1929 and 1930, and it converted that recession by its actions into the Great Depression. The major villain in the Great Depression in my opinion was unquestionably the Federal Reserve System. Since that time, it presided over a doubling of prices in World War II. It financed the inflation of the 1970s. On the whole it has a very poor record. It’s done far more harm than good.

Let’s go through Friedman’s complaints one at a time.

World War I inflation.

Friedman blames World War I inflation on the Fed. But Friedman, as I have shown in many previous posts, had a very shaky understanding of how the gold standard worked. His remark about the Fed’s “presiding over a doubling of prices” during World War I is likely yet another example of that incomprehension, though his use of the weasel words “presided over,” rather than the straightforward “caused,” does suggest that Friedman was merely trying to insinuate that the Fed was blameworthy while actually understanding that the Fed had almost no control over inflation in World War I. The US remained formally on the gold standard until April 6, 1917, when the US declared war on Germany and entered World War I, formally suspending the convertibility of the dollar into gold.

As long as the US remained on a gold standard, the value of the dollar was determined by the value of gold. The US was importing lots of gold during the first two and a half years of World War I as the belligerents used their gold reserves and demonetized their gold coins to finance imports of war material from the US. The massive demonetization of gold caused gold to depreciate on world markets. Another neutral country, Sweden, actually left the gold standard during World War I to avoid the inevitable inflation associated with the wartime depreciation of gold. So it was either ignorant or disingenuous for Friedman to attribute the World War I inflation to the actions of the Federal Reserve. No country could have remained on the gold standard during World War I without accepting inflation, and the Federal Reserve had no legal authority to abrogate or suspend the legal convertibility of the dollar into a fixed weight of gold.

The Post-War Collapse of 1921

Friedman correctly blames the 1921 collapse on the Fed. However, after a rapid wartime and postwar inflation, the US was trying to recreate a gold standard while holding 40% of the world’s gold reserves. The Fed therefore took steps to stabilize the value of gold, which meant raising interest rates, thereby inducing a further inflow of gold into the US to stop the real value of gold from falling in international markets. The problem was that the Fed went overboard, causing a really steep, and probably unnecessary, deflation.

The Great Depression

Friedman is right that the Fed helped cause the Great Depression by its actions in 1928 and 1929, raising interest rates to try to quell rapidly rising stock prices. But the concerns about rising stock-market prices were probably misplaced, and the Fed’s raising of interest rates caused an inflow of gold into the US just when a gold outflow from the US was needed to accommodate the rising demand for gold on the part of the Bank of France and other central banks rejoining the gold standard and accumulating gold reserves. It was the sudden tightening of the world gold market, with the US and France and other countries rejoining the gold standard simultaneously trying to increase their gold holdings, that caused the value of gold to rise (and nominal prices to fall) in 1929, thereby starting the Great Depression. Friedman totally ignored the international context in which the Fed was operating, failing to see that the US price level under the newly established gold standard, being determined by the international value of gold, was beyond the control of the Fed.

World War II Inflation

As with World War I, Friedman blamed the Fed for “presiding over” a doubling of prices in World War II. But unlike World War I, when rising US prices reflected a falling real value of gold caused by events outside the US and beyond the control of the Fed, in World War II rising US prices reflected the falling value of an inconvertible US dollar caused by Fed “money printing” at the behest of the President and the Treasury. But why did Friedman consider Fed money printing in World War II to have been a blameworthy act on the part of the Fed? The US was then engaged in a total war against the Axis powers. Under those circumstances, was the primary duty of the Fed to keep prices stable, or to use its control over the “printing press” to ensure that the US government had sufficient funds to win the war against Nazi totalitarianism and allied fascist forces, thereby preserving American liberties and values even more fundamental than keeping inflation low and enabling creditors to extract what was owed to them by their debtors in dollars of undiminished real purchasing power?

Now it’s true that many of Friedman’s libertarian allies were appalled by US participation in World War II, but Friedman, to his credit, did not share their disapproval. Given his support for the war, however, Friedman should have at least acknowledged the obvious role of inflationary finance in emergency war financing, a role which, as Earl Thompson and I and others have argued, rationalizes the historic legal monopoly on money printing maintained by almost all sovereign states. To condemn the Fed for inflationary policies during World War II without recognizing the critical role of the “printing press” in war finance was a remarkably uninformed and biased judgment on Friedman’s part.

1970s Inflation

The Fed certainly played a major role in the inflation of the 1970s, which, as early as 1966, had already started to creep up from the 1-2% rates that had prevailed from 1953 to 1965. The rise in inflation was again triggered by war-related expenditures, owing to the growing combat role of the US in Vietnam starting in 1965. The Fed’s role in rising inflation in the late 1960s and early 1970s was hardly the Fed’s finest hour, but again, it is unrealistic to expect a public institution like the Fed to withhold the financing necessary to support a military action undertaken by the national government. Certainly, the role of Arthur Burns, appointed by Nixon in 1970 to become Fed Chairman, in encouraging Nixon to impose wage-and-price controls as an anti-inflationary measure was one of the most disreputable chapters in the Fed’s history, and the cluelessness of Carter’s first Fed Chairman, G. William Miller, appointed to succeed Burns, is almost legendary. But given the huge oil-price increases of 1973-74 and 1978-79, a policy of accommodating those supply-side shocks by allowing a temporary increase in inflation was probably optimal. So, given the difficult circumstances under which the Fed was operating, the increased inflation of the 1970s was not entirely undesirable.

But although Friedman was often sensitive to the subtleties and nuances of policy making when rendering scholarly historical and empirical judgments, he rarely allowed subtleties and nuances to encroach on his denunciations when he was operating in full rabble-rousing mode.

Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz:

It is a serious – and a disturbing – question, because it suggests that free-market ideology, which is a powerful – though not necessarily the most powerful – force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country, is the result of how economics was taught in the 1970s and 1980s, and in the 1960s at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. A very broad range of advances in economic theory on a number of fronts — economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history — supported by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both Thatcher and Reagan to lose popular support. But the growing prestige of free-market economics and economists provided some degree of intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a “depression,” not a mere “recession,” and dismissing supply-side economics as “a slogan for packaging certain economic ideas rather than an orthodox economic category.” That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close as the Nobel Prize winner was quickly ushered out of the shouting range of White House reporters. On the whole, however, Republican politicians have not been lacking for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan’s free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products, carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, but which, following OPEC’s quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and there were few who questioned his prediction. One opponent described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.
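The arithmetic of the entitlements mechanism is easy to illustrate. The sketch below uses hypothetical numbers and a deliberately simplified assumption of my own: that entitlements equalized every refiner’s per-barrel crude cost at the national weighted-average cost of controlled domestic and imported oil (the actual program was more elaborate).

```python
# Illustrative sketch (hypothetical numbers, simplified model) of how the
# "entitlements" program subsidized imported crude oil.

def net_import_cost(p_domestic, p_world, domestic_share):
    """Return (average per-barrel crude cost after entitlements, implicit
    per-barrel subsidy to imports relative to the world price)."""
    # Entitlements (by assumption here) equalize every refiner's cost at
    # the national weighted-average cost of domestic and imported crude.
    avg_cost = domestic_share * p_domestic + (1 - domestic_share) * p_world
    # Each imported barrel's net cost falls from p_world to avg_cost.
    subsidy = p_world - avg_cost
    return avg_cost, subsidy

# Suppose controlled domestic crude sells at $6/bbl, world crude at $30/bbl,
# and domestic oil is 60% of refinery throughput.
avg, subsidy = net_import_cost(6.0, 30.0, 0.6)
print(round(avg, 2), round(subsidy, 2))  # 15.6 14.4
```

Note that in this example a $1 increase in the world price raises a refiner’s net cost by only 40 cents (one minus the domestic share), which is the sense in which the controls reduced the elasticity of refiners’ demand for imported crude and raised OPEC’s profit-maximizing price.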

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the implicit assumption underlying the idea that freely determined market prices optimally guide the decentralized decisions of economic agents is that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy and sell and produce closely correspond to the social costs and benefits that an omniscient central planner – if such a being actually did exist – would take into account in making his plans. But in the real world, the private costs and benefits considered by individual agents when making their plans and decisions often don’t reflect all relevant costs and benefits. So the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad, as we – i.e., those of us who are not philosophical anarchists – all acknowledge in practice, and in theory, when we affirm that competing private armies, competing private police forces, and competing judicial systems would not provide for the common defense and for domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those decisions that are subject to, and heavily – even mainly – influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be even in well-ordered economies until early in my blogging career, when it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue — implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat, because insofar as some people can earn income by exploiting their knowledge advantages over the counterparties with whom they trade, incentives are created to seek the kinds of knowledge that can be exploited in trades with less-well informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society from the acquisition of such knowledge, inasmuch as gains accruing to the exploiter are largely achieved at the expense of the knowledge-disadvantaged counterparties with whom they trade.

For example, substantial resources are now almost certainly wasted by various forms of financial research aiming to gain, slightly sooner than others, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products, and the structuring of other kinds of transactions, designed mainly to capitalize on and exploit individual weaknesses in underestimating the probability of adverse events (e.g., late repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research encouraged by the potential to patent discoveries may induce too much research activity by enabling patent-protected monopolies to exploit discoveries that would have been made eventually even without the monopoly rents accruing to the patent holders.

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, his counterparty could be easily multiplied. Because many of the highest incomes earned are associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have led resources to be shifted from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which the private benefits and costs are very different from social benefits and costs, the benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Despite having earned his Ph.D. at Harvard, he too started out as a pro-free-market Friedman acolyte when he arrived back at UCLA – where he had been an undergraduate student of Armen Alchian – as an assistant professor in the early 1960s. But gradually adopting the Buchanan public-choice paradigm – Nancy Maclean, please take note – of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents, a view that seems increasingly on target as our political process becomes simultaneously increasingly ideological and increasingly dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy – mostly based on national-defense grounds – as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters in the context of the question posed by Dina Pomeranz is the economic logic that gets you from a set of facts and a set of behavioral and causality assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic that takes you from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causality assumptions has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.

What’s Wrong with the Price-Specie-Flow Mechanism? Part I

The tortured intellectual history of the price-specie-flow mechanism (PSFM), which received its classic exposition in an essay (“Of the Balance of Trade”) by David Hume about 275 years ago, is not a history that, properly understood, provides solid grounds for optimism about the chances for progress in what we, somewhat credulously, call economic science. In brief, the price-specie-flow mechanism asserts that, under a gold or commodity standard, deviations between the price levels of the countries on the gold standard induce gold to be shipped from countries where prices are relatively high to countries where prices are relatively low, the gold flows continuing until price levels are equalized. Hence, the compound adjective “price-specie-flow,” signifying that the mechanism is set in motion by price-level differences that induce gold (specie) flows.

The PSFM is thus premised on a version of the quantity theory of money in which price levels in each country on the gold standard are determined by the quantity of money circulating in that country. In his account, Hume assumed that money consists entirely of gold, so that he could present a scenario of disturbance and re-equilibration strictly in terms of changes in the amount of gold circulating in each country. Inasmuch as Hume held a deeply hostile attitude toward banks, believing them to be essentially inflationary engines of financial disorder, subsequent interpretations of the PSFM had to struggle to formulate a more general theoretical account of international monetary adjustment to accommodate the presence of the fractional-reserve banking so detested by Hume and to devise an institutional framework that would facilitate operation of the adjustment mechanism under a fractional-reserve-banking system.
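The quantity-theoretic logic underlying the PSFM can be sketched crudely; this is a textbook-style rendering in my notation, not Hume’s own formulation:

```latex
% Each country's price level is governed by its domestic gold stock
% (the equation of exchange, with velocity V_i and real output y_i):
P_i = \frac{M_i V_i}{y_i}
% Gold flows from the high-price country to the low-price country,
\frac{dM_i}{dt} = -\lambda\,(P_i - P_j), \qquad \lambda > 0,
% the flow ceasing only when price levels are equalized (P_i = P_j).
```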

In previous posts on this blog (e.g., here, here and here) and in a recent article on the history of the (misconceived) distinction between rules and discretion, I’ve discussed the role played by the PSFM in one not very successful attempt at monetary reform, the English Bank Charter Act of 1844. The Bank Charter Act was intended to ensure the maintenance of monetary equilibrium by reforming the English banking system so that it would operate the way Hume described it in his account of the PSFM. However, despite the failings of the Bank Charter Act, the general confusion about monetary theory and policy that has beset economic theory for over two centuries has allowed the PSFM to retain an almost canonical status, so that it continues to be widely regarded as the basic positive and normative model of how the classical gold standard operated. Using the PSFM as their normative model, monetary “experts” came up with the idea that, in countries with gold inflows, monetary authorities should reduce interest rates (i.e., lending rates to the banking system), causing monetary expansion through the banking system, and, in countries losing gold, the monetary authorities should do the opposite. These vague maxims, described as the “rules of the game,” gave only directional guidance about how to respond to an increase or decrease in gold reserves, thereby avoiding the strict numerical rules, and resulting financial malfunctions, prescribed by the Bank Charter Act.

In his 1932 defense of the insane gold-accumulation policy of the Bank of France, Hayek posited an interpretation of what the rules of the game required that oddly mirrored the strict numerical rules of the Bank Charter Act, insisting that, having increased the quantity of banknotes by about as much as its gold reserves had increased after restoration of the gold convertibility of the franc, the Bank of France had done all that the “rules of the game” required it to do. In fairness to Hayek, I should note that decades after his misguided defense of the Bank of France, he was sharply critical of the Bank Charter Act. At any rate, the episode indicates how indefinite the “rules of the game” actually were as a guide to policy. And, for that reason alone, it is not surprising that evidence that the rules of the game were followed during the heyday of the gold standard (roughly 1880 to 1914) is so meager. But the main reason for the lack of evidence that the rules of the game were actually followed is that the PSFM, whose implementation the rules of the game were supposed to guarantee, was a theoretically flawed misrepresentation of the international-adjustment mechanism under the gold standard.

Until my second year of graduate school (1971-72), I had accepted the PSFM as a straightforward implication of the quantity theory of money, endorsed by such luminaries as Hayek, Friedman and Jacob Viner. I had taken Axel Leijonhufvud’s graduate macro class in my first year, so in my second year I audited Earl Thompson’s graduate macro class in which he expounded his own unique approach to macroeconomics. One of the first eye-opening arguments that Thompson made was to deny that the quantity theory of money is relevant to an economy on the gold standard, the kind of economy (allowing for silver and bimetallic standards as well) that classical economics, for the most part, dealt with. It was only after the Great Depression that fiat money was widely accepted as a viable system for the long-term rather than a mere temporary wartime expedient.

What determines the price level for a gold-standard economy? Thompson’s argument was simple. The value of gold is determined relative to every other good in the economy by exactly the same forces of supply and demand that determine relative prices for every other real good. If gold is the standard, or numeraire, in terms of which all prices are quoted, then the nominal price of gold is one (the relative price of gold in terms of itself). A unit of currency is specified as a certain quantity of gold, so the price level measured in terms of the currency unit varies inversely with the value of gold. The amount of money in such an economy will correspond to the amount of gold, or, more precisely, to the amount of gold that people want to devote to monetary, as opposed to real (non-monetary), uses. But financial intermediaries (banks) will offer to exchange IOUs convertible on demand into gold for IOUs of individual agents. The IOUs of banks have the property that they are accepted in exchange, unlike the IOUs of individual agents which are not accepted in exchange (not strictly true as bills of exchange have in the past been widely accepted in exchange). Thus, the amount of money (IOUs payable on demand) issued by the banking system depends on how much money, given the value of gold, the public wants to hold; whenever people want to hold more money than they have on hand, they obtain additional money by exchanging their own IOUs – not accepted in payment — with a bank for a corresponding amount of the bank’s IOUs – which are accepted in payment.
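The inverse relation between the value of gold and the price level can be put in a few lines of code. This is only a toy sketch with hypothetical numbers (the parity and the real value of gold below are invented for illustration, not drawn from any historical data):

```python
# Toy sketch (hypothetical numbers): under a gold standard a currency unit is
# defined as a fixed weight of gold, so the currency price of a basket of
# goods varies inversely with the real value of gold.

def price_level(gold_per_currency_unit, baskets_per_ounce):
    """Currency price of one goods basket.

    gold_per_currency_unit: ounces of gold defining one currency unit
    baskets_per_ounce: real value of gold (goods baskets per ounce of gold)
    """
    value_of_one_unit = gold_per_currency_unit * baskets_per_ounce
    return 1.0 / value_of_one_unit  # currency units per basket

p0 = price_level(0.05, 2.0)  # baseline
p1 = price_level(0.05, 4.0)  # real value of gold doubles...
assert abs(p1 - p0 / 2) < 1e-9  # ...and the price level halves
```

Nothing here depends on the quantity of money: the price level is pinned down by the mint parity and the real value of gold, which is the point of Thompson's argument.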

Thus, the simple monetary theory that corresponds to a gold standard starts with a value of gold determined by real factors. Given the public’s demand to hold money, the banking system supplies whatever quantity of money is demanded by the public at a price level corresponding to the real value of gold. This monetary theory is a theory of an ideal banking system producing a competitive supply of money. It is the basic monetary paradigm of Adam Smith and a significant group of subsequent monetary theorists who formed the Banking School (and also the Free Banking School) that opposed the Currency School doctrine that provided the rationale for the Bank Charter Act. The model is highly simplified and based on assumptions that aren’t always, or even ever, fully satisfied in the real world. The same qualification applies to all economic models, but the realism of this monetary model is certainly open to question.

So under the ideal gold-standard model described by Thompson, what was the mechanism of international monetary adjustment? All countries on the gold standard shared a common price level, because, under competitive conditions, prices for any tradable good at any two points in space can deviate by no more than the cost of transporting that product from one point to the other. If geographic price differences are constrained by transportation costs, then the price effects of an increased quantity of gold at any location cannot be confined to prices at that location; arbitrage spreads the price effect at one location across the whole world. So the basic premise underlying the PSFM — that price differences across space resulting from any disturbance to the equilibrium distribution of gold would trigger equilibrating gold shipments to equalize prices — is untenable; price differences between any two points are always constrained by the cost of transportation between those points, whatever the geographic distribution of gold happens to be.
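The arbitrage constraint doing the work in the previous paragraph can be sketched numerically (all prices and transport costs below are hypothetical, chosen only to illustrate the logic):

```python
# Toy sketch (hypothetical numbers): spatial arbitrage keeps the price of a
# tradable good at any location within transport cost of its price elsewhere.

def arbitrage_band(price_elsewhere, transport_cost):
    """Interval within which the local price must lie to preclude arbitrage."""
    return (price_elsewhere - transport_cost, price_elsewhere + transport_cost)

def arbitrage_profit(local_price, price_elsewhere, transport_cost):
    """Per-unit profit from shipping the good between the two locations."""
    return max(0.0, abs(local_price - price_elsewhere) - transport_cost)

low, high = arbitrage_band(10.0, 1.5)
assert (low, high) == (8.5, 11.5)
assert arbitrage_profit(11.0, 10.0, 1.5) == 0.0  # inside the band: no profit
assert arbitrage_profit(13.0, 10.0, 1.5) == 1.5  # outside: shipping pays
```

A local gold inflow cannot push local prices outside this band, whatever the local money stock, because any price outside the band creates a profitable shipment that closes the gap.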

As a theoretical matter, then, there is a single world price level under the gold standard – actually it’s more correct to call it a price band, reflecting the range of local price differences consistent with arbitrage – so the idea that local prices vary in proportion to the local money stock is inconsistent with standard price theory. But Thompson also provided an empirical refutation of the PSFM. According to the PSFM, when gold is flowing into one country and out of another, the price levels in the two countries should move in opposite directions. But the evidence shows that price-level changes in gold-standard countries were highly correlated even when gold flows were in opposite directions. Similarly, if the PSFM were correct, cyclical changes in output and employment should have been correlated with gold flows, but no such correlation between cyclical movements and gold flows is observed in the data. It was on this theoretical foundation that Thompson built a novel — except that Hawtrey and Cassel had anticipated him by about 50 years — interpretation of the Great Depression as a deflationary episode caused by a massive increase in the demand for gold between 1929 and 1933, in contrast to Milton Friedman’s narrative that explained the Great Depression in terms of a massive contraction in the US money stock between 1929 and 1933.

Thompson’s ideas about the gold standard, which he had been working on for years before I encountered them, were in the air, and it wasn’t long before I came across them again in the work of Harry Johnson, Bob Mundell, Jacob Frenkel and others at the University of Chicago who were then developing what came to be known as the monetary approach to the balance of payments. Not long after leaving UCLA in 1976 for my first teaching job, I picked up a volume edited by Johnson and Frenkel with the catchy title The Monetary Approach to the Balance of Payments. I studied many of the papers in the volume, but only two made a lasting impression: the first, by Johnson and Frenkel, “The Monetary Approach to the Balance of Payments: Essential Concepts and Historical Origins,” and the last, by McCloskey and Zecher, “How the Gold Standard Really Worked.” Reinforcing what I had learned from Thompson, the papers provided a deeper understanding of the relevant history of thought on the international-monetary-adjustment mechanism, and of the important empirical and historical evidence that contradicts the PSFM. I also owe my interest in Hawtrey to the Johnson and Frenkel paper, which cites Hawtrey repeatedly for many of the basic concepts of the monetary approach, especially the existence of a single arbitrage-constrained international price level under the gold standard.

When I attended the History of Economics Society Meeting in Toronto a couple of weeks ago, I had the pleasure of meeting Deirdre McCloskey for the first time. Anticipating that we would have a chance to chat, I reread the 1976 paper in the Johnson and Frenkel volume and a follow-up paper by McCloskey and Zecher (“The Success of Purchasing Power Parity: Historical Evidence and Its Implications for Macroeconomics“) that appeared in a volume edited by Michael Bordo and Anna Schwartz, A Retrospective on the Classical Gold Standard. We did have a chance to chat and she did attend the session at which I talked about Friedman and the gold standard, but regrettably the chat was not a long one, so I am going to try to keep the conversation going with this post, and the next one in which I will discuss the two McCloskey and Zecher papers and especially the printed comment to the later paper that Milton Friedman presented at the conference for which the paper was written. So stay tuned.

PS Here are links to Thompson’s essential papers on monetary theory, “The Theory of Money and Income Consistent with Orthodox Value Theory” and “A Reformulation of Macroeconomic Theory,” about which I have written several posts in the past. And here is a link to my paper “A Reinterpretation of Classical Monetary Theory” showing that Earl’s ideas actually captured much of what classical monetary theory was all about.

Samuelson Rules the Seas

I think Nick Rowe is a great economist; I really do. And on top of that, he recently has shown himself to be a very brave economist, fearlessly claiming to have shown that Paul Samuelson’s classic 1980 takedown (“A Corrected Version of Hume’s Equilibrating Mechanisms for International Trade“) of David Hume’s classic 1752 articulation of the price-specie-flow mechanism (PSFM) (“Of the Balance of Trade“) was all wrong. Although I am a great admirer of Paul Samuelson, I am far from believing that he was error-free. But I would be very cautious about attributing an error in pure economic theory to Samuelson. So if you were placing bets, Nick would certainly be the longshot in this match-up.

Of course, I should admit that I am not an entirely disinterested observer of this engagement, because in the early 1970s, long before I discovered the Samuelson article that Nick is challenging, Earl Thompson had convinced me that Hume’s account of the PSFM was all wrong, the international arbitrage of tradable-goods prices implying that gold movements between countries couldn’t cause the price levels of those countries in terms of gold to deviate from a common level beyond the limits imposed by the cost of international commodity arbitrage. And Thompson’s reasoning was largely restated in the ensuing decade by Jacob Frenkel and Harry Johnson (“The Monetary Approach to the Balance of Payments: Essential Concepts and Historical Origins”) and by Donald McCloskey and Richard Zecher (“How the Gold Standard Really Worked”), both in the 1976 volume on The Monetary Approach to the Balance of Payments edited by Johnson and Frenkel, and by David Laidler in his essay “Adam Smith as a Monetary Economist,” explaining why in The Wealth of Nations Smith ignored his best friend Hume’s classic essay on the PSFM. So the main point of Samuelson’s takedown of Hume and the PSFM was not even original. What was original about Samuelson’s classic article was his dismissal of the rationalization that the PSFM applies when there are both non-tradable and tradable goods, so that national price levels can deviate from the common international price level in terms of tradables, showing that the inclusion of non-tradables into the analysis serves only to slow down the adjustment process after a gold-supply shock.

So let’s follow Nick in his daring quest to disprove Samuelson, and see where that leads us.

Assume that durable sailing ships are costly to build, but have low (or zero for simplicity) operating costs. Assume apples are the only tradeable good, and one ship can transport one apple per year across the English Channel between Britain and France (the only countries in the world). Let P be the price of apples in Britain, P* be the price of apples in France, and R be the annual rental of a ship, (all prices measured in gold), then R=ABS(P*-P).

I am sorry to report that Nick has not gotten off to a good start here. There cannot be only one tradable good. It takes two to tango, and two to trade. If apples are being traded, they must be traded for something, and that something is something other than apples. And, just to avoid misunderstanding, let me say that that something is also something other than gold. Otherwise, there couldn’t possibly be a difference between the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of the PSFM and the PSFM itself. We need at least three goods – two real goods plus gold – providing a relative price between the two real goods and two absolute prices quoted in terms of gold (the numeraire). So if there are at least two absolute prices, then Nick’s equation for the annual rental of a ship R must be rewritten as follows: R=ABS[P(A)*-P(A)+P(SE)*-P(SE)], where P(A) is the price of apples in Britain, P(A)* is the price of apples in France, P(SE) is the price of something else in Britain, and P(SE)* is the price of that same something else in France.

OK, now back to Nick:

In this model, the Law of One Price (P=P*) will only hold if the volume of exports of apples (in either direction) is unconstrained by the existing stock of ships, so rentals on ships are driven to zero. But then no ships would be built to export apples if ship rentals were expected to be always zero, which is a contradiction of the Law of One Price because arbitrage is impossible without ships. But an existing stock of ships represents a sunk cost (sorry) and they keep on sailing even as rentals approach zero. They sail around Samuelson’s Iceberg model (sorry) of transport costs.

This is a peculiar result in two respects. First, it suggests, perhaps inadvertently, that the law of one price requires equality between the prices of goods in every location, when in fact it only requires that prices in different locations not differ by more than the cost of transportation. The second, more serious, peculiarity is that, with only one good being traded, the price difference in that single good between the two locations has to be sufficient to cover the cost of building the ship. That suggests that there has to be a very large price difference in that single good to justify building the ship, but in fact there are at least two goods being shipped, so it is the sum of the price differences of the two goods that must be sufficient to cover the cost of building the ship. The more tradable goods there are, the smaller the price difference in any single good necessary to cover the cost of building the ship.
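The sum-of-price-gaps point is easy to make concrete. In the sketch below (illustrative numbers only; `ship_rental` follows Nick's R as the sum of absolute price differences across the goods a ship carries), the rental needed to justify building a ship can be earned from one large gap or from several small ones:

```python
# Toy sketch: a ship's rental reflects the sum of the absolute price gaps
# across all the goods it can carry, so the more tradable goods there are,
# the smaller the price gap per good needed to cover the cost of a ship.

def ship_rental(price_gaps):
    """Rental R as the sum of absolute price differences across goods.
    Signs indicate direction of shipment (exports each way)."""
    return sum(abs(gap) for gap in price_gaps)

# One good must carry the whole rental by itself...
assert ship_rental([2.0]) == 2.0
# ...but two goods with half the gap each yield the same rental,
assert ship_rental([1.0, -1.0]) == 2.0
# ...and four goods need only a quarter of the gap apiece.
assert ship_rental([0.5, -0.5, 0.5, -0.5]) == 2.0
```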

Again, back to Nick:

Start with zero exports, zero ships, and P=P*. Then suppose, like Hume, that some of the gold in Britain magically disappears. (And unlike Hume, just to keep it simple, suppose that gold magically reappears in France.)

Uh-oh. Just to keep it simple? I don’t think so. To me, keeping it simple would mean looking at one change in initial conditions at a time. The one relevant change – the one discussed by Hume – is a reduction in the stock of gold in Britain. But Nick is looking at two changes — a reduced stock of gold in Britain and an increased stock of gold in France — simultaneously. Why does it matter? Because the key point at issue is whether a national price level – i.e., Britain’s — can deviate from the international price level. In Nick’s two-country example, there should be one national price level and one international price level, which means that the only price level subject to change as a result of the change in initial conditions should be, as in Hume’s example, the British price level, while the French price level – representing the international price level – remained constant. In a two-country model, this can only be made plausible by assuming that France is large compared to Britain, so that a loss of gold could potentially affect the British price level without changing the French price level. Once again, back to Nick.

The price of apples in Britain drops, the price of apples in France rises, and so the rent on a ship is now positive because you can use it to export apples from Britain to France. If that rent is big enough, and expected to stay big long enough, some ships will be built, and Britain will export apples to France in exchange for gold. Gold will flow from France to Britain, so the stock of gold will slowly rise in Britain and slowly fall in France, and the price of apples will likewise slowly rise in Britain and fall in France, so ship rentals will slowly fall, and the price of ships (the Present Value of those rents) will eventually fall below the cost of production, so no new ships will be built. But the ships already built will keep on sailing until rentals fall to zero or they rot (whichever comes first).

So notice what Nick has done. Instead of confronting the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of Hume, which asserts that a world price level determines the national price level, Nick has simply begged the question by not assuming that the world price of gold, which determines the world price level, is constant. Instead, he posits a decreased value of gold in France, owing to an increased French stock of gold, and an increased value of gold in Britain, owing to a decreased British stock of gold, and then conflates the resulting adjustment in the value of gold with the operation of commodity arbitrage. Why Nick thinks his discussion is relevant to the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique escapes me.

The flow of exports and hence the flow of specie is limited by the stock of ships. And only a finite number of ships will be built. So we observe David Hume’s price-specie flow mechanism playing out in real time.

This bugs me. Because it’s all sorta obvious really.

Yes, it bugs me, too. And, yes, it is obvious. But why is it relevant to the question under discussion, which is whether there is an international price level in terms of gold that constrains movements in national price levels in countries in which gold is the numeraire? In other words, if there is a shock to the gold stock of a small open economy, how much will the price level in that small open economy change? By the percentage change in the stock of gold in that country – as Hume maintained – or by the minuscule percentage change in the international stock of gold, gold prices in the country that has lost gold being constrained from changing by more than allowed by the cost of arbitrage operations? Nick’s little example is simply orthogonal to the question under discussion.
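The quantitative difference between the two answers is easy to make concrete. The stocks below are purely hypothetical, chosen only to show the orders of magnitude involved:

```python
# Toy comparison (hypothetical numbers): by how much does a small open
# economy's price level fall after it loses some of its gold?

def hume_price_change(gold_lost, domestic_gold_stock):
    # Hume's PSFM: the price level falls in proportion to the DOMESTIC loss.
    return -gold_lost / domestic_gold_stock

def arbitrage_price_change(gold_lost, world_gold_stock):
    # Arbitrage view: it falls only in proportion to the change in the
    # WORLD gold stock (assuming the gold leaves circulation entirely).
    return -gold_lost / world_gold_stock

# A country holding 100 units of a 10,000-unit world stock loses 10 units:
assert hume_price_change(10.0, 100.0) == -0.1          # 10% fall for Hume
assert arbitrage_price_change(10.0, 10000.0) == -0.001  # 0.1% via arbitrage
```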

I skip Nick’s little exegetical discussion of Hume’s essay and proceed to what I think is the final substantive point that Nick makes.

Prices don’t just arbitrage themselves. Even if we take the limit of my model, as the cost of building ships approaches zero, we need to explain what process ensures the Law of One Price holds in equilibrium. Suppose it didn’t…then people would buy low and sell high…..you know the rest.

There are different equilibrium conditions being confused here. The equilibrium arbitrage conditions are not the same as the equilibrium conditions for international monetary equilibrium. Arbitrage conditions for individual commodities can hold even if the international distribution of gold is not in equilibrium. So I really don’t know what conclusion Nick is alluding to here.

But let me end on what I hope is a conciliatory and constructive note. As always, Nick is making an insightful argument, even if it is misplaced in the context of Hume and the PSFM. And the upshot of Nick’s argument is that transportation costs are a function of the dispersion of prices, because, as the incentive to ship products to capture arbitrage profits increases, the cost of shipping will increase as arbitragers bid up the value of resources specialized to the processes of transporting stuff. So the assumption that the cost of transportation can be treated as a parameter is not really valid, which means that the constraints imposed on national price-level movements are not really parametric; they are endogenously determined within an appropriately specified general equilibrium model. If Nick is willing to settle for that proposition, I don’t think that our positions are that far apart.

Helicopter Money and the Reflux Problem

Although I try not to seem overly self-confident or self-satisfied, I do give myself a bit of credit for being willing to admit my mistakes, of which I’ve made my share. So I am going to come straight out and admit it up front: I have not been reading Nick Rowe’s blog lately. Realizing my mistake, I recently looked up his posts for the past few months. Reading one of Nick’s posts is always an educational experience, teaching us how to think about an economic problem in the way that a good – I mean a really good — economist ought to think about the problem. I don’t always agree with Nick, but in trying to figure out whether I agree — and if not, why not — I always find that I have gained some fresh understanding of, or a deeper insight into, the problem than I had before. So in this post, I want to discuss a post that Nick wrote for his blog a couple of months ago on “helicopter money” and the law of reflux. Nick and I have argued about the law of reflux several times (see, e.g., here, here and here, and for those who just can’t get enough here is J. P. Koning’s take on Rowe v. Glasner) and I suspect that we still don’t see eye to eye on whether or under what circumstances the law of reflux has any validity. The key point that I have emphasized is that there is a difference in the way that commercial banks create money and the way that a central bank or a monetary authority creates money. In other words, I think that I hold a position somewhere in between Nick’s skepticism about the law of reflux and Mike Sproul’s unqualified affirmation of the law of reflux. So the truth is that I don’t totally disagree with what Nick writes about helicopter money. But I think it will help me and possibly people who read this post if I can explain where and why I take issue with what Nick has to say on the subject of helicopter money.

Nick begins his discussion with an extreme example in which people have a fixed and unchanging demand for money – one always needs to bear in mind that when economists speak about a demand for money they mean a demand to hold money in their wallets or their bank accounts. People will accept money in excess of their demand to hold money, but if the amount of money that they have in their wallets or in their bank accounts is more than desired, they don’t necessarily take immediate steps to get rid of their excess cash, though they will be more tolerant of excess cash in their bank accounts than in their wallets. So if central bank helicopters start bombarding the population with piles of new cash, those targeted will pick up the cash and put the cash in their wallets or deposit it into their bank accounts, but they won’t just keep the new cash in their wallets or their bank accounts permanently, because they will generally have better options for the superfluous cash than just leaving it in their wallets or their bank accounts. But what else can they do with their excess cash?

Well the usual story is that they spend the cash. But what do they spend it on? And the usual answer is that they buy stuff with the excess cash, causing a little consumption boom that either drives up prices of goods and services, or possibly, if wages and prices are “sticky,” causes total output to increase (at least temporarily unless the story starts from an initial condition of unemployed resources). And that’s what Nick seems to be suggesting in this passage.

If the central bank prints more currency, and drops it out of a helicopter, will the people refuse to pick it up, and leave the newly-printed notes lying on the sidewalk?

No. That’s silly. They will pick it up, and spend it. Each individual knows he can get rid of any excess money, even though it is impossible for individuals in the aggregate to get rid of excess money. What is true for each individual is false for the whole. It’s a fallacy of composition to assume otherwise.

But this version of the story is problematic for the same reason that early estimates of the multiplier in Keynesian models were vastly overstated. A one-time helicopter drop of money will be treated by most people as a windfall, not as a permanent increase in their income, so that it will not cause people to increase their spending on stuff except insofar as they expect their permanent income to have increased. So the main response of most people to the helicopter drop will be to make some adjustments in the composition of their balance sheets. People may use the cash to buy other income generating assets (including consumer durables), but they will hardly change their direct expenditures on present consumption.
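The permanent-income logic invoked above can be sketched with a simple annuity calculation (illustrative numbers; the real argument is of course richer than this two-line approximation):

```python
# Toy sketch: under the permanent-income hypothesis, a one-time windfall W
# raises sustainable (permanent) consumption only by its annuity value r*W
# per period, not by W itself; the bulk of W goes into the balance sheet.

def consumption_response(windfall, interest_rate):
    """Per-period increase in consumption from a one-time windfall."""
    return interest_rate * windfall

# A $1,000 helicopter windfall at a 3% interest rate raises spending by
# only about $30 per year; the other $970 is saved or used to retire debt.
extra_spending = consumption_response(1000.0, 0.03)
assert abs(extra_spending - 30.0) < 1e-9
```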

So what else could people do with excess cash besides buying consumer durables? Well, they could buy real or financial assets (e.g., houses and paintings or bonds) driving up the value of those assets, but it is not clear why the value of those assets, which fundamentally reflects the expected future flows of real services or cash associated with those assets and the rates at which people discount future consumption relative to present consumption, should be affected by an increase in the amount of cash that people happen to be holding at any particular moment in time. People could also use their cash to pay off debts, but that would just mean that the cash held by debtors would be transferred into the hands of their creditors. So the question of what happens to the excess cash – and, if nothing happens to it, of how the excess cash comes to be willingly held – is not an easy one to answer.

Being the smart economist that he is, Nick understands the problem and he addresses it a few paragraphs below in a broader context in which people can put cash into savings accounts as well as spend it on stuff.

Now let me assume that the central bank also offers savings accounts, as well as issuing currency. Savings accounts may pay interest (at a rate set by the central bank), but cannot be used as a medium of exchange.

Start in equilibrium where the stock of currency is exactly $100 per person. What happens if the central bank prints more currency and drops it out of a helicopter, holding constant the nominal rate of interest it pays on savings accounts?

I know what you are thinking. I know how most economists would be thinking. (At least, I think I do.) “Aha! This time it’s different! Because now people can get rid of the excess currency, by depositing it in their savings accounts at the central bank, so Helicopter Money won’t work.” You are implicitly invoking the Law of Reflux to say that an excess supply of money must return to the bank that issued that money.

And you are thinking wrong. You are making exactly the same fallacy of composition as you would have been making if you said that people would leave the excess currency lying on the sidewalk. People in aggregate can only get rid of the excess currency by depositing it in their savings accounts (or throwing it away) therefore each individual will get rid of his excess currency by depositing it in his savings account (since it’s better than throwing it away).

There are 1,001 different ways an individual can get rid of excess currency, and depositing it in his savings account is only one of those 1,001 ways. Why should an individual care if depositing it in his savings account is the only way that works for the aggregate? (If people always thought like that, littering would never be a problem.) And if individuals do spend any portion of their excess currency, so that NGDP rises, and is expected to keep in rising, then the (assumed fixed) nominal interest rate offered on savings accounts at the central bank will start to look less attractive, and people will actually withdraw money from their savings accounts. Not because they want to hold extra currency, but because they plan to spend it.

There are indeed 1,001 ways that people could dispose of their excess cash balances, but how many of those 1,001 ways would be optimal under the assumptions of Nick’s little thought experiment? Not that many, because optimal spending decisions would be dictated by preferences for consumption over time, and there is no reason to assume that optimal spending plans would be significantly changed by the apparent, and not terribly large, wealth windfall associated with the helicopter drops. There could be some increase in purchases of assets like consumer durables, but one would expect that most of the windfall would be used to retire debt or to acquire interest-earning assets like central-bank deposits or their equivalent.

So, to be clear, I am not saying that Nick has it all wrong; I don’t deny that there could be some increase in expenditures on stuff; all I am saying is that in the standard optimizing models that we use, the implied effect on spending from an increase in cash holding seems to be pretty small.

Nick then goes on to bring commercial banks into his story.

The central bank issues currency, and also offers accounts at which commercial banks can keep “reserves”. People use both central bank currency and commercial bank chequing accounts as their media of exchange; commercial banks use their reserve accounts at the central bank as the medium of exchange they use for transactions between themselves. And the central bank allows commercial banks to swap currency for reserves in either direction, and reserves pay a nominal rate of interest set by the central bank.

My story now (as best as I can tell) matches the (implicit) model in “Helicopter Money: the Illusion of a Free Lunch” by Claudio Borio, Piti Disyatat, and Anna Zabai. (HT Giles Wilkes.) They argue that Helicopter Money will be unwanted and must Reflux to the central bank to be held as central bank reserves, where those reserves pay interest and so are just like (very short-term) government bonds, or savings accounts at the central bank. Their argument rests on a fallacy of composition. Individuals in aggregate can only get rid of unwanted currency that way, but this does not mean that individuals will choose to get rid of unwanted currency that way.

It seems to me that the effect that Nick is relying on is rather weak. If non-interest-bearing helicopter money can be costlessly converted into interest-bearing reserves at the central bank, then commercial banks will compete with each other to induce people with unwanted helicopter money in their pockets to convert the cash into interest-bearing deposits, so that the banks can pocket the interest on reserves. Competition will force the banks to share their interest income with depositors. Again, there may be some increase in spending on stuff associated with the helicopter drops, but it seems unlikely that it would be very large relative to the size of the drop.
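The competitive pass-through mechanism just described can be sketched as follows (hypothetical rates, purely for illustration):

```python
# Toy sketch (hypothetical rates): if banks earn interest on reserves (IOR),
# competition for deposits forces them to pass most of that interest through
# to depositors, so holders of excess cash can earn interest by depositing it
# rather than spending it on goods.

def competitive_deposit_rate(ior, intermediation_cost):
    """Deposit rate under perfect competition: IOR net of banks' costs."""
    return max(0.0, ior - intermediation_cost)

rate = competitive_deposit_rate(0.02, 0.005)  # 2% IOR, 0.5% bank costs
assert abs(rate - 0.015) < 1e-9  # depositors receive 1.5%
```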

It seems to me that the only way to answer the question how an excess supply of cash following a helicopter drop gets eliminated is to use the idea proposed by Earl Thompson over 40 years ago in his seminal, but unpublished, paper “A Reformulation of Macroeconomic Theory” which I have discussed in five posts (here, here, here, here and here) over the past four years. Even as I write this sentence, I feel a certain thrill of discovery in understanding more clearly than I ever have before the profound significance of Earl’s insight. The idea is simply this: in any intertemporal macroeconomic model, the expected rate of inflation, or the expected future price level, has to function, not as a parameter, but as an equilibrating variable. In any intertemporal macromodel, there will be a unique expected rate of inflation, or expected future price level, that is consistent with equilibrium. If actual expected inflation equals the equilibrium expected rate, the economy may achieve its equilibrium; if the actual expected rate does not equal the equilibrium expected rate, the economy cannot reach equilibrium.

So if the monetary authority bombards its population with helicopter money, the economy will not reach equilibrium unless the expected rate of inflation of the public equals the rate of inflation (or the future price level) that is consistent with the amount of helicopter money being dropped by the monetary authority. But the fact that the expected rate of inflation is an equilibrating variable tells us nothing – absolutely nothing – about whether there is any economic mechanism whereby the equilibrium expectation of inflation is actually realized. The reason that the equilibrium value of expected inflation tells us nothing about the mechanism by which the equilibrium expected rate of inflation is achieved is that the mechanism does not exist. If it pleases you to say that rational expectations is such a mechanism, you are free to do so, but it should be obvious that the assertion that rational expectations ensures that the actual expected rate of inflation is the equilibrium expected rate of inflation is nothing more than an exercise in question begging.

And it seems to me that, in explaining why helicopter drops are not nullified by reflux, Nick is implicitly relying on a change in inflation expectations as a reason why putting money into savings accounts will not eliminate the excess supply of cash. But it also seems to me that Nick is just saying that for equilibrium to be restored after a helicopter drop, inflation expectations have to change. Nothing I have said above should be understood to deny the possibility that inflation expectations could change as a result of a helicopter drop. In fact I think there is a strong likelihood that helicopter drops change inflation expectations. The point I am making is that we should be clear about whether we are making a contingent (potentially false) assertion about a causal relationship or making a logically necessary inference from given premises.

Thus, moving away from strictly logical reasoning, Nick makes an appeal to experience to argue that helicopter drops are effective.

We know, empirically, that helicopter money (in moderation of course) does not lead to bizarre consequences. Helicopter money is perfectly normal; central banks do it (almost) all the time. They print currency, the stock of currency grows over time, and since that currency pays no interest this is a profitable business for central banks and the governments that own them.

Ah yes, in the good old days before central banks started paying interest on reserves. After it became costless to hold money, helicopter drops aren’t what they used to be.

The demand for central bank currency seems to rise roughly in proportion to NGDP (the US is maybe an exception, since much is held abroad), so countries with rising NGDP are normally doing helicopter money. And doing helicopter money, just once, does not empirically lead to central banks being forced to set nominal interest rates at zero forever. And it would be utterly bizarre if it did; what else are governments supposed to do with the profits central banks earn from printing paper currency?

Why, of course! Give them to the banks by paying interest on reserves. Nick concludes with this thought.

The lesson we learn from all this is that the Law of Reflux will prevent Helicopter Money from working only if the central bank refuses to let NGDP rise at the same time. Which is like saying that pressing down on the gas pedal won’t work if you press the brake pedal down hard enough so the car can’t accelerate.

I would put it slightly differently. If the central bank engages in helicopter drops while simultaneously proclaiming that its inflation target is below the rate of inflation consistent with its helicopter drops, reflux may prevent helicopter drops from having any effect.

The Free Market Economy Is Awesome and Fragile

Scott Sumner’s three most recent posts (here, here, and here) have been really great, and I’d like to comment on all of them. I will start with a comment on his post discussing whether the free market economy is stable; perhaps I will get around to the other two next week. Scott uses a 2009 paper by Robert Hetzel as the starting point for his discussion. Hetzel distinguishes between those who view the stabilizing properties of price adjustment as being overwhelmed by real instabilities reflecting fluctuations in consumer and entrepreneurial sentiment – waves of optimism and pessimism – and those who regard the economy as either perpetually in equilibrium (RBC theorists) or just usually in equilibrium (Monetarists) unless destabilized by monetary shocks. Scott classifies himself, along with Hetzel and Milton Friedman, in the latter category.

Scott then brings Paul Krugman into the mix:

Friedman, Hetzel, and I all share the view that the private economy is basically stable, unless disturbed by monetary shocks. Paul Krugman has criticized this view, and indeed accused Friedman of intellectual dishonesty, for claiming that the Fed caused the Great Depression. In Krugman’s view, the account in Friedman and Schwartz’s Monetary History suggests that the Depression was caused by an unstable private economy, which the Fed failed to rescue because of insufficiently interventionist monetary policies. He thinks Friedman was subtly distorting the message to make his broader libertarian ideology seem more appealing.

This is a tricky topic for me to handle, because my own view of what happened in the Great Depression is in one sense similar to Friedman’s – monetary policy, not some spontaneous collapse of the private economy, was what precipitated and prolonged the Great Depression – but Friedman had a partial, simplistic and distorted view of how and why monetary policy failed. And although I believe Friedman was correct to argue that the Great Depression did not prove that the free market economy is inherently unstable and requires comprehensive government intervention to keep it from collapsing, I think that his account of the Great Depression was to some extent informed by his belief that his own simple k-percent rule for monetary growth was a silver bullet that would ensure economic stability and high employment.

I’d like to first ask a basic question: Is this a distinction without a meaningful difference? There are actually two issues here. First, does the Fed always have the ability to stabilize the economy, or does the zero bound sometimes render their policies impotent?  In that case the two views clearly do differ. But the more interesting philosophical question occurs when not at the zero bound, which has been the case for all but one postwar recession. In that case, does it make more sense to say the Fed caused a recession, or failed to prevent it?

Here’s an analogy. Someone might claim that LeBron James is a very weak and frail life form, whose legs will cramp up during basketball games without frequent consumption of fluids. Another might suggest that James is a healthy and powerful athlete, who needs to drink plenty of fluids to perform at his best during basketball games. In a sense, both are describing the same underlying reality, albeit with very different framing techniques. Nonetheless, I think the second description is better. It is a more informative description of LeBron James’s physical condition, relative to average people.

By analogy, I believe the private economy in the US is far more likely to be stable with decent monetary policy than is the economy of Venezuela (which can fall into depression even with sufficiently expansionary monetary policy, or indeed overly expansionary policies.)

I like Scott’s LeBron James analogy, but I have two problems with it. First, although LeBron James is a great player, he’s not perfect. Sometimes, even he messes up. When he messes up, it may not be his fault, in the sense that, with better information or better foresight – say, a little more rest in the second quarter – he might have sunk the game-winning three-pointer at the buzzer. Second, it’s one thing to say that a monetary shock caused the Great Depression, but maybe we just don’t know how to avoid monetary shocks. LeBron can miss shots; so can the Fed. Milton Friedman certainly didn’t know how to avoid monetary shocks, because his pet k-percent rule, as F. A. Hayek shrewdly observed, was simply a monetary shock waiting to happen. And John Taylor certainly doesn’t know how to avoid monetary shocks, because his pet rule would have caused the Fed to raise interest rates in 2011 with possibly devastating consequences. I agree that a nominal GDP level target would have resulted in a monetary policy superior to the policy the Fed has been conducting since 2008, but do I really know that? I am not sure that I do. The false promise held out by Friedman was that it is easy to get monetary policy right all the time. It certainly wasn’t the case for Friedman’s pet rule, and I don’t think that there is any monetary rule out there that we can be sure will keep us safe and secure and fully employed.

But going beyond the LeBron analogy, I would make a further point. We just have no theoretical basis for saying that the free-market economy is stable. We can prove that, under some assumptions – and it is, to say the least, debatable whether the assumptions could properly be described as reasonable – a model economy corresponding to the basic neoclassical paradigm can be solved for an equilibrium solution. The existence of an equilibrium solution means basically that the neoclassical model is logically coherent, not that it tells us much about how any actual economy works. The pieces of the puzzle could all be put together in a way that everything fits, but that doesn’t mean that in practice there is any mechanism whereby that equilibrium is ever reached or even approximated.

The argument for the stability of the free market that we learn in our first course in economics, which shows us how price adjusts to balance supply and demand, is an argument that, when every market but one – well, actually two, but we don’t have to quibble about it – is already in equilibrium, price adjustment in the remaining market – if it is small relative to the rest of the economy – will bring that market into equilibrium as well. That’s what I mean when I refer to the macrofoundations of microeconomics. But when many markets are out of equilibrium, even the markets that seem to be in equilibrium (with amounts supplied and demanded equal) are not necessarily in equilibrium, because the price adjustments in other markets will disturb the seeming equilibrium of the markets in which supply and demand are momentarily equal. So there is not necessarily any algorithm, either in theory or in practice, by which price adjustments in individual markets would ever lead the economy into a state of general equilibrium. If we believe that the free market economy is stable, our belief is therefore not derived from any theoretical proof of the stability of the free market economy, but rests simply on an intuition, and on some sort of historical assessment, that free markets tend to work well most of the time. I would just add that, in his seminal 1937 paper, “Economics and Knowledge,” F. A. Hayek actually made just that observation, though it is not an observation that he, or most of his followers – with the notable and telling exceptions of G. L. S. Shackle and Ludwig Lachmann – made a big fuss about.
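The absence of any convergence guarantee for multi-market price adjustment is easy to exhibit numerically. The following toy tâtonnement (my own construction, with made-up excess-demand functions) has a perfectly well-defined equilibrium at p1 = p2 = 1, yet because cross-price effects dominate own-price effects, the adjustment process drives prices away from it:

```python
# A stylized tatonnement (an illustrative toy, not from the post):
# price adjustment dp_i/dt = z_i(p) in a two-good economy whose
# cross-price effects outweigh own-price effects. Both markets
# clear at p1 = p2 = 1, yet starting near that equilibrium the
# adjustment process moves prices away from it.

def z(p1, p2):
    """Excess demands; equilibrium (z1 = z2 = 0) at p1 = p2 = 1."""
    return (-p1 + 2.0 * p2 - 1.0, 2.0 * p1 - p2 - 1.0)

def tatonnement(p1, p2, h=0.01, steps=500):
    for _ in range(steps):
        z1, z2 = z(p1, p2)
        # raise each price where demand exceeds supply, lower it otherwise
        p1, p2 = p1 + h * z1, p2 + h * z2
    return p1, p2

print(tatonnement(1.01, 1.01))   # a 1% perturbation grows instead of dying out
```

Each market, taken alone, behaves sensibly (its own excess demand falls in its own price); it is the interaction between the two adjustments that produces the divergence.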

Axel Leijonhufvud, who is certainly an admirer of Hayek, addresses the question of the stability of the free-market economy in terms of what he calls a corridor. If you think of an economy moving along a time path, and if you think of the time path that would be followed by the economy if it were operating at a full-employment equilibrium, Leijonhufvud’s corridor hypothesis is that the actual time path of the economy tends to revert to the equilibrium time path as long as deviations from the equilibrium are kept within certain limits, those limits defining the corridor. However, if the economy, for whatever reason (exogenous shocks or other mishaps), leaves the corridor, the spontaneous equilibrating tendencies causing the actual time path to revert back to the equilibrium time path may break down, and there may be no further tendency for the economy to revert back to its equilibrium time path. And as I pointed out recently in my post on Earl Thompson’s “Reformulation of Macroeconomic Theory,” he was able to construct a purely neoclassical model with two potential equilibria, one of which was unstable, so that a shock from the lower equilibrium would lead either to a reversion to the higher-level equilibrium or to a downward spiral with no endogenous stopping point.

Having said all that, I still agree with Scott’s bottom line: if the economy is operating below full employment, and inflation and interest rates are low, there is very likely a problem with monetary policy.

Thompson’s Reformulation of Macroeconomic Theory, Part V: A Neoclassical Black Hole

It’s been over three years since I posted the fourth of my four previous installments in this series about Earl Thompson’s unpublished paper “A Reformulation of Macroeconomic Theory,” Thompson’s strictly neoclassical alternative to the standard Keynesian IS-LM model. Given the long hiatus, a short recapitulation seems in order.

The first installment was an introduction summarizing Thompson’s two main criticisms of the Keynesian model: 1) the disconnect between the standard neoclassical marginal productivity theory of production and factor pricing and the Keynesian assertion that labor receives a wage equal to its marginal product, thereby implying the existence of a second scarce factor of production (capital), but with the market for capital services replaced in the IS-LM model by the Keynesian expenditure functions, creating a potential inconsistency between the IS-LM model and a deep property of neoclassical theory; 2) the market for capital services having been excluded from the IS-LM model, the model lacks a variable that equilibrates the choice between holding money or real assets, so that the Keynesian investment function is incompletely specified, the Keynesian equilibrium condition for spending – equality between savings and investment – taking no account of the incentive for capital accumulation or the relationship, explicitly discussed by Keynes, between current investment and the (expected) future price level. Excluding the dependence of the equilibrium rate of spending on (expected) inflation from the IS-LM model renders the model logically incomplete.

The second installment was a discussion of the Hicksian temporary-equilibrium method used by Thompson to rationalize the existence of involuntary unemployment. For Thompson involuntary unemployment means unemployment caused by overly optimistic expectations by workers of wage offers, leading them to mistakenly set reservation wages too high. The key advantage of the temporary-equilibrium method is that it reconciles the convention of allowing a market-clearing price to equilibrate supply and demand with the phenomenon of substantial involuntary unemployment in business-cycle downturns. Because workers have an incentive to withhold their services in order to engage in further job search or job training or leisure, their actual short-run supply of labor services in a given time period is highly elastic at the expected wage. If wage offers are below expectations, workers (mistakenly = involuntarily) choose unemployment, but given those mistaken expectations, the labor market is cleared, with the observed wage equilibrating the demand for labor services and the supply of labor services. There are clearly problems with this way of modeling the labor market, but it does provide an analytical technique that can account for cyclical fluctuations in unemployment within a standard microeconomic framework.
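A bare-bones numerical sketch of such a labor market (my own illustrative numbers, not Thompson’s) shows how the mechanism works: reservation wages are set relative to the expected wage, labor supply is highly elastic near that expected wage, and a wage offer only modestly below expectations produces substantial measured unemployment even though the market clears given the mistaken expectations.

```python
# Toy temporary-equilibrium labor market (illustrative numbers only):
# workers' reservation wages are spread uniformly between
# (1 - slack) * expected wage and the expected wage itself, so labor
# supply is highly elastic near the expected wage. A worker accepts
# an offer only if it meets her reservation wage; rejections show up
# as measured unemployment even though, given the (mistaken)
# expectations, the market clears at the observed wage.

def employment_rate(offered, expected, slack=0.05):
    """Fraction of workers accepting the offered wage."""
    lo, hi = expected * (1.0 - slack), expected
    if offered >= hi:
        return 1.0
    if offered <= lo:
        return 0.0
    return (offered - lo) / (hi - lo)

print(employment_rate(offered=10.0, expected=10.0))  # correct expectations: full employment
print(employment_rate(offered=9.6,  expected=10.0))  # 4% shortfall: heavy "involuntary" unemployment
```

The steep response of employment to a small gap between offered and expected wages is the elasticity the post describes; with correct expectations the same market exhibits full employment.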

In the third installment, I showed how Thompson derived his FF curve, representing combinations of price levels and interest rates consistent with (temporary) equilibrium in both factor markets (labor services and capital services) and two versions of the LM curve, representing price levels and interest rates consistent with equilibrium in the money market. The two versions of the LM curve (analogous, but not identical, to the Keynesian LM curve) correspond to different monetary regimes. In what Thompson called the classical case, the price level is fixed by convertibility of output into cash at a fixed exchange rate, with money being supplied by a competitive banking system paying competitive interest on cash balances. The LM curve in this case is vertical at the fixed price level, with any nominal rate of interest being consistent with equilibrium in the money market, inasmuch as the amount of money demanded depends not on the nominal interest rate, but on the difference between the nominal interest rate and the competitively determined interest rate paid on cash. In the modern case, cash is non-interest bearing and supplied monopolistically by the monetary authority, so the LM curve is upward-sloping, with the cost of holding cash rising with the rate of interest, thereby reducing the amount of money demanded and increasing the price level for a given quantity of money supplied by the monetary authority. The solution of the model corresponds to the intersection of the FF and LM curves. For the classical case, the intersection is unique, but in the modern case since both curves are upward sloping, multiple intersections are possible.

The focus of the fourth installment was on setting up a model analogous to the Keynesian model by replacing the market for capital services excluded by Walras’s Law with something similar to the Keynesian expenditure functions (consumption, investment, government spending, etc.). The key point is that the FF and LM curves implicitly define a corresponding CC curve (shown in Figure 4 of the third installment) with the property that, at all points on the CC curve, the excess demand for (supply of) money exactly equals the excess supply of (demand for) labor. Thus, the CC curve represents a stock equilibrium in the market for commodities (i.e., a single consumption/capital good) rather than a flow rate of expenditure and income as represented by the conventional IS curve. But the inconsistency between the upward-sloping CC curve and the downward sloping IS curve reflects the underlying inconsistency between the neoclassical and the Keynesian paradigms.

In this installment, I am going to work through Thompson’s argument about the potential for an unstable equilibrium in the version of his model with an upward-sloping LM curve corresponding to the case in which non-interest bearing money is monopolistically supplied by a central bank. Thompson makes the argument using Figure 5, a phase diagram showing the potential equilibria for such an economy in terms of the FF curve (representing price levels and nominal interest rates consistent with equilibrium in the markets for labor and capital services) and the CC curve (representing price levels and nominal interest rates consistent with equilibrium in the output market).

A phase diagram (Thompson’s Figure 5) shows the direction of price adjustment when the economy is not in equilibrium (one of the two points of intersection between the FF and the CC curves). A disequilibrium implies a price change in response to an excess supply or excess demand in some market. All points above and to the left of the FF curve correspond to an excess supply of capital services, implying a falling nominal interest rate; points below and to the right of the FF curve correspond to an excess demand for capital services, implying a rising interest rate. Points above and to the left of the CC curve correspond to an excess demand for output, implying a rising price level; points below and to the right of the CC curve correspond to an excess supply of output, implying a falling price level. Points between the FF and CC curves correspond either to an excess demand for both commodities and capital services, implying a rising price level and a rising nominal interest rate (in the region between the two points of intersection – Eu and Es – between the CC and FF curves), or to an excess supply of both capital services and commodities, implying a falling interest rate and a falling price level (in the regions below the lower intersection Eu and above the upper intersection Es). The arrows in the diagram indicate the direction in which the price level and the nominal interest rate are changing at any point in the diagram.

Given the direction of price change corresponding to points off the CC and FF curves, the upper intersection is shown to be a stable equilibrium, while the lower intersection is unstable. Moreover, the instability corresponding to the lower intersection is very dangerous, because entering the region between the CC and FF curves below Eu means getting sucked into a vicious downward spiral of prices and interest rates that can only be prevented by a policy intervention to shift the CC curve to the right, either directly by way of increased government spending or tax cuts, or indirectly, through monetary policy aimed at raising the price level and expected inflation, shifting the LM curve, and thereby the CC curve, to the right. It’s like stepping off a cliff into a black hole.
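Collapsing the two-dimensional dynamics to one dimension for illustration (a drastic simplification of my own, not Thompson’s actual system), the qualitative story can be sketched as a price-level law of motion with a stable upper equilibrium, an unstable lower one, and a deflationary collapse below it:

```python
# A one-dimensional caricature of the Figure 5 dynamics (my
# construction): dP/dt = f(P) with a stable upper equilibrium at
# P = 2 (the analogue of Es), an unstable lower one at P = 1 (the
# analogue of Eu), and a deflationary collapse toward P = 0 from
# any starting point below Eu -- the "black hole."

def f(P):
    return P * (P - 1.0) * (2.0 - P)   # zero at P = 0, 1, 2

def simulate(P, h=0.01, steps=2000):
    # forward-Euler integration of dP/dt = f(P)
    for _ in range(steps):
        P += h * f(P)
    return P

for P0 in (1.05, 0.95):
    print(f"start {P0:.2f} -> end {simulate(P0):.3f}")
```

Starting just above the unstable equilibrium, the price level is pulled up to the stable equilibrium at P = 2; starting just below it, the price level spirals down toward zero with no endogenous stopping point, which is the qualitative feature of Thompson’s lower intersection.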

Although I have a lot of reservations about the practical relevance of this model as an analytical tool for understanding cyclical fluctuations and counter-cyclical policy, which I plan to discuss in a future post, the model does resonate with me, and it does so especially after my recent posts about the representative-agent modeling strategy in New Classical economics (here, here, and here). Representative-agent models, I argued, are inherently unable to serve as analytical tools in macroeconomics, because their reductionist approach implies that all relevant decision making can be reduced to the optimization of a single agent, insulating the analysis from any interactions between decision-makers. But it is precisely the interaction effects between decision-makers that create the analytical problems that constitute the subject matter of the discipline, or sub-discipline, known as macroeconomics. That Robert Lucas has made it his life’s work to annihilate this field of study is a sad commentary on his contribution as an economic theorist, Nobel Prize or no Nobel Prize.

That is one reason why I regard Thompson’s model, despite its oversimplifications, as important: it is constructed on a highly aggregated, yet strictly neoclassical, foundation, including continuous market-clearing, arriving at the remarkable conclusion that not only is there an unstable equilibrium, but it is at least possible for an economy in the neighborhood of the unstable equilibrium to be caught in a vicious downward deflationary spiral in which falling prices do not restore equilibrium but, instead, suck the economy into a zero-output black hole. That result seems to me to be a major conceptual breakthrough, showing that the strict rationality assumptions of neoclassical theory can lead to an outcome that is totally at odds with the usual presumption that the standard neoclassical assumptions inevitably generate a unique stable equilibrium and render macroeconomics superfluous.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
