Archive for the 'coordination failure' Category

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald, himself, went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who had an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.
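
To make the point concrete, here is a minimal sketch (in Python, with payoff numbers of my own choosing, loosely in the spirit of the von Neumann-Morgenstern rendering of the chase, and purely illustrative) of how specifying payoffs turns the Holmes-Moriarty impasse into a solvable problem: each player has an optimal mixed strategy, found from the standard indifference conditions for a 2x2 zero-sum game without a pure-strategy saddle point.

```python
# Illustrative payoffs to Moriarty (zero-sum: Holmes receives the negative).
# Rows: Moriarty goes to Dover or Canterbury; columns: Holmes does the same.
# The numbers are my own, chosen only to make the mechanics visible.
A = [[100,   0],    # Moriarty at Dover      vs Holmes at (Dover, Canterbury)
     [-50, 100]]    # Moriarty at Canterbury vs Holmes at (Dover, Canterbury)

# Mixed-strategy equilibrium of a 2x2 zero-sum game via indifference conditions
# (valid here because the game has no pure-strategy saddle point).
den = A[0][0] - A[0][1] - A[1][0] + A[1][1]
p = (A[1][1] - A[1][0]) / den        # probability Moriarty chooses Dover
q = (A[1][1] - A[0][1]) / den        # probability Holmes chooses Dover
value = q * (p * A[0][0] + (1 - p) * A[1][0]) + (1 - q) * (p * A[0][1] + (1 - p) * A[1][1])

print(f"Moriarty plays Dover with prob {p:.2f}, Holmes with prob {q:.2f}, value {value:.0f}")
```

With these illustrative numbers, Moriarty goes to Dover with probability 0.6 and Holmes with probability 0.4; the point is not the particular numbers, but that an equilibrium of the interaction exists only in probabilistic strategies.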

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents to be reconciled – i.e., rendered mutually consistent — in an economic or game-theoretic equilibrium. Under Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapunov differential equations could describe market price adjustments as functions of market excess demands that would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
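
For readers who have never seen what such an exercise amounts to, here is a toy discrete tatonnement in Python. The economy (two Cobb-Douglas traders, two goods, parameters entirely made up) is not Samuelson’s, and a difference equation stands in for his differential equations; the point is only to show the form of the argument: a fictional auctioneer adjusts the price in proportion to excess demand, and stability means that the adjustment converges on the equilibrium price.

```python
# A toy tatonnement: two Cobb-Douglas traders, two goods, good 2 as numeraire.
# All parameters are made up for illustration; no actual market works this way.

def excess_demand_good1(p1, alphas=(0.3, 0.7), endowments=((1.0, 0.0), (0.0, 1.0))):
    """Aggregate excess demand for good 1 at price p1 (price of good 2 fixed at 1)."""
    demand = 0.0
    for a, (e1, e2) in zip(alphas, endowments):
        wealth = p1 * e1 + e2
        demand += a * wealth / p1          # Cobb-Douglas demand for good 1
    supply = sum(e1 for e1, _ in endowments)
    return demand - supply

p1, step = 3.0, 0.5                        # start well away from the equilibrium p1* = 1
for t in range(200):
    z = excess_demand_good1(p1)
    if abs(z) < 1e-8:
        break
    p1 += step * z                         # the "auctioneer" raises p1 if demand exceeds supply

print(f"tatonnement settled at p1 = {p1:.4f} after {t} rounds")
```

The convergence here depends on the nicely behaved, gross-substitutes structure of the toy economy; the disappointing results referred to above reflect the fact that nothing of the sort can be guaranteed for general excess-demand functions.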

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the problems of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had been grappling with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis was starting to come apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic if not overtly hostile to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is a possibility, not an inevitability. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel De Vroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. De Vroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” De Vroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, De Vroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, De Vroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the corollary principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. De Vroey further quotes from the declaration of war by Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the declaration of war became evident later, once right thinking was enforced, and woe unto him who dared deviate from the right way of thinking.

But, as De Vroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, De Vroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, De Vroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non-sequitur. And from that non-sequitur, Lucas jumps to a further non-sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on the market-clearing principle turns out to be a subterfuge by which the pretense of upholding the principle conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. Under that consensus, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly its inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model and the absence of a market for real capital or a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher, in his important work The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance about how to improve the current sorry state of macroeconomics.

 

Traffic Jams and Multipliers

Since my previous post, which I closed by quoting the abstract of Brian Arthur’s paper “Complexity Economics: A Different Framework for Economic Thought,” I have been reading his paper and some of the papers he cites, especially Magda Fontana’s paper “The Santa Fe Perspective on Economics: Emerging Patterns in the Science of Complexity,” and Mark Blaug’s paper “The Formalist Revolution of the 1950s.” The papers bring together a number of themes that I have been emphasizing in previous posts on what I consider the misguided focus of modern macroeconomics on rational-expectations equilibrium as the organizing principle of macroeconomic theory. Among these themes are the importance of coordination failures in explaining macroeconomic fluctuations, the inappropriateness of the full general-equilibrium paradigm in macroeconomics, and the mistaken transformation of microfoundations from a theoretical problem to be solved into an absolute methodological requirement to be insisted upon (almost exactly analogous to the absurd transformation of the mind-body problem into a dogmatic insistence that the mind is merely a figment of our own imagination), or, stated another way, a recognition that macrofoundations are just as necessary for economics as microfoundations.

Let me quote again from Arthur’s essay, this time a beautiful passage which captures the interdependence between the micro and macro perspectives:

To look at the economy, or areas within the economy, from a complexity viewpoint then would mean asking how it evolves, and this means examining in detail how individual agents’ behaviors together form some outcome and how this might in turn alter their behavior as a result. Complexity in other words asks how individual behaviors might react to the pattern they together create, and how that pattern would alter itself as a result. This is often a difficult question; we are asking how a process is created from the purposed actions of multiple agents. And so economics early in its history took a simpler approach, one more amenable to mathematical analysis. It asked not how agents’ behaviors would react to the aggregate patterns these created, but what behaviors (actions, strategies, expectations) would be upheld by — would be consistent with — the aggregate patterns these caused. It asked in other words what patterns would call for no changes in microbehavior, and would therefore be in stasis, or equilibrium. (General equilibrium theory thus asked what prices and quantities of goods produced and consumed would be consistent with — would pose no incentives for change to — the overall pattern of prices and quantities in the economy’s markets. Classical game theory asked what strategies, moves, or allocations would be consistent with — would be the best course of action for an agent (under some criterion) — given the strategies, moves, allocations his rivals might choose. And rational expectations economics asked what expectations would be consistent with — would on average be validated by — the outcomes these expectations together created.)

This equilibrium shortcut was a natural way to examine patterns in the economy and render them open to mathematical analysis. It was an understandable — even proper — way to push economics forward. And it achieved a great deal. Its central construct, general equilibrium theory, is not just mathematically elegant; in modeling the economy it re-composes it in our minds, gives us a way to picture it, a way to comprehend the economy in its wholeness. This is extremely valuable, and the same can be said for other equilibrium modelings: of the theory of the firm, of international trade, of financial markets.

But there has been a price for this equilibrium finesse. Economists have objected to it — to the neoclassical construction it has brought about — on the grounds that it posits an idealized, rationalized world that distorts reality, one whose underlying assumptions are often chosen for analytical convenience. I share these objections. Like many economists, I admire the beauty of the neoclassical economy; but for me the construct is too pure, too brittle — too bled of reality. It lives in a Platonic world of order, stasis, knowableness, and perfection. Absent from it is the ambiguous, the messy, the real. (pp. 2-3)

Later in the essay, Arthur provides a simple example of a non-equilibrium complex process: traffic flow.

A typical model would acknowledge that at close separation from cars in front, cars lower their speed, and at wide separation they raise it. A given high density of traffic of N cars per mile would imply a certain average separation, and cars would slow or accelerate to a speed that corresponds. Trivially, an equilibrium speed emerges, and if we were restricting solutions to equilibrium that is all we would see. But in practice at high density, a nonequilibrium phenomenon occurs. Some car may slow down — its driver may lose concentration or get distracted — and this might cause cars behind to slow down. This immediately compresses the flow, which causes further slowing of the cars behind. The compression propagates backwards, traffic backs up, and a jam emerges. In due course the jam clears. But notice three things. The phenomenon’s onset is spontaneous; each instance of it is unique in time of appearance, length of propagation, and time of clearing. It is therefore not easily captured by closed-form solutions, but best studied by probabilistic or statistical methods. Second, the phenomenon is temporal, it emerges or happens within time, and cannot appear if we insist on equilibrium. And third, the phenomenon occurs neither at the micro-level (individual car level) nor at the macro-level (overall flow on the road) but at a level in between — the meso-level. (p. 9)
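
Arthur’s verbal description can be reproduced with a few lines of code. The sketch below is not Arthur’s model; it is a bare-bones version of the well-known Nagel-Schreckenberg cellular automaton (cars on a ring road, speeds adjusted to the gap ahead, an occasional random slowdown), with parameters I have simply made up. Run it and stop-and-go waves appear spontaneously and propagate backward through the traffic, exactly the kind of meso-level, inherently temporal phenomenon Arthur describes.

```python
import random

# A bare-bones Nagel-Schreckenberg traffic simulation on a circular road.
# Parameters (density, braking probability, etc.) are arbitrary illustrative choices.
random.seed(1)
road_cells, n_cars, v_max, p_brake = 300, 75, 5, 0.25
pos = sorted(random.sample(range(road_cells), n_cars))   # distinct starting cells
vel = [0] * n_cars

def gap(i):
    """Empty cells between car i and the car ahead of it (the road is a ring)."""
    return (pos[(i + 1) % n_cars] - pos[i]) % road_cells - 1

for t in range(1, 501):
    for i in range(n_cars):
        vel[i] = min(vel[i] + 1, v_max)                   # accelerate toward the speed limit
        vel[i] = min(vel[i], gap(i))                      # never run into the car ahead
        if vel[i] > 0 and random.random() < p_brake:
            vel[i] -= 1                                   # a driver momentarily slows down
    pos = [(x + v) % road_cells for x, v in zip(pos, vel)]
    if t % 100 == 0:
        stopped = sum(v == 0 for v in vel)
        print(f"t={t:3d}: mean speed {sum(vel)/n_cars:.2f}, stopped cars {stopped}")
```

At the chosen density the uniform free-flow pattern does not survive: the mean speed settles well below the 5-cell speed limit, and at any moment some cars are at a standstill even though nothing structural has changed on the road. Restrict attention to the equilibrium, uniform-flow solution and the phenomenon simply cannot appear.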

This simple example provides an excellent insight into why macroeconomic reasoning can be led badly astray by focusing on the purely equilibrium relationships characterizing what we now think of as microfounded models. In arguing against the Keynesian multiplier analysis supposedly justifying increased government spending as a countercyclical tool, Robert Barro wrote the following in an unfortunate Wall Street Journal op-ed piece, which I have previously commented on here and here.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.

In the disequilibrium environment of a recession, it is at least possible that injecting additional spending into the economy could produce effects that a similar injection of spending, under “normal” macro conditions, would not produce, just as somehow withdrawing a few cars from a congested road could increase the average speed of all the remaining cars on the road, by a much greater amount than would withdrawing a few cars from an uncongested road. In other words, microresponses may be sensitive to macroconditions.

Aggregate Demand and Coordination Failures

Regular readers of this blog may have noticed that I have been writing less about monetary policy and more about theory and methodology than when I started blogging a little over three years ago. Now one reason for that is that I’ve already said what I want to say about policy, and, since I get bored easily, I look for new things to write about. Another reason is that, at least in the US, the economy seems to have reached a sustainable growth path that seems likely to continue for the near to intermediate term. I think that monetary policy could be doing more to promote recovery, and I wish that it would, but unfortunately, the policy is what it is, and it will continue more or less in the way that Janet Yellen has been saying it will. Falling oil prices, because of increasing US oil output, suggest that growth may speed up slightly even as inflation stays low, possibly even falling to one percent or less. At least in the short-term, the fall in inflation does not seem like a cause for concern. A third reason for writing less about monetary policy is that I have been giving a lot of thought to what it is that I dislike about the current state of macroeconomics, and as I have been thinking about it, I have been writing about it.

In thinking about what I think is wrong with modern macroeconomics, I have been coming back again and again, though usually without explicit attribution, to an idea that was impressed upon me as an undergrad and grad student by Axel Leijonhufvud: that the main concern of macroeconomics ought to be with failures of coordination. A Swede, trained in the tradition of the Wicksellian Stockholm School, Leijonhufvud immersed himself in the study of the economics of Keynes and Keynesian economics, while also mastering the Austrian literature, and becoming an admirer of Hayek, especially Hayek’s seminal 1937 paper, “Economics and Knowledge.”

In discussing Keynes, Leijonhufvud focused on two kinds of coordination failures.

First, there is a problem in the labor market. If there is unemployment because the real wage is too high, an individual worker can’t solve the problem by offering to accept a reduced nominal wage. Suppose the price of output is $1 a unit and the wage is $10 a day, but the real wage consistent with full employment is $9 a day, meaning that producers choose to produce less output than they would produce if the real wage were lower, thus hiring fewer workers than they would if the real wage were lower than it is. If an individual worker offers to accept a wage of $9 a day, but other workers continue to hold out for $10 a day, it’s not clear that an employer would want to hire the worker who offers to work for $9 a day. If employers are not hiring additional workers because they can’t cover the cost of the additional output produced with the incremental revenue generated by the added output, the willingness of one worker to work for $9 a day is not likely to make a difference to the employer’s output and hiring decisions. It is not obvious what sequence of transactions would result in an increase in output and employment when the real wage is above the equilibrium level. There are complex feedback effects from a change, so that the net effect of making those changes in a piecemeal fashion is unpredictable, even though there is a possible full-employment equilibrium with a real wage of $9 a day. If the problem is that real wages in general are too high for full employment, the willingness of an individual worker to accept a reduced wage from a single employer does not fix the problem.

In the standard competitive model, there is a perfect market for every commodity in which every transactor is assumed to be able to buy and sell as much as he wants. But the standard competitive model has very little to say about the process by which those equilibrium prices are arrived at. And a typical worker is never faced with that kind of choice posited in the competitive model: an impersonal uniform wage at which he can decide how many hours a day or week or year he wants to work at that uniform wage. Under those circumstances, Keynes argued that the willingness of some workers to accept wage cuts in order to gain employment would not significantly increase employment, and might actually have destabilizing side-effects. Keynes tried to make this argument in the framework of an equilibrium model, though the nature of the argument, as Don Patinkin among others observed, was really better suited to a disequilibrium framework. Unfortunately, Keynes’s argument was subsequently dumbed down to a simple assertion that wages and prices are sticky (especially downward).

Second, there is an intertemporal problem, because the interest rate may be stuck at a rate too high to allow enough current investment to generate the full-employment level of spending given the current level of the money wage. In this scenario, unemployment isn’t caused by a real wage that is too high, so trying to fix it by wage adjustment would be a mistake. Since the source of the problem is the rate of interest, the way to fix the problem would be to reduce the rate of interest. But depending on the circumstances, there may be a coordination failure: bear speculators, expecting the rate of interest to rise when it falls to abnormally low levels, prevent the rate of interest from falling enough to induce enough investment to support full employment. Keynes put too much weight on bear speculators as the source of the intertemporal problem; Hawtrey’s notion of a credit deadlock would actually have been a better way to go, and nowadays, when people speak about a Keynesian liquidity trap, what they really have in mind is something closer to Hawtreyan credit deadlock than to the Keynesian liquidity trap.

Keynes surely deserves credit for identifying and explaining two possible sources of coordination failures, failures affecting the macroeconomy, because interest rates and wages, though they actually come in many different shapes and sizes, affect all markets and are true macroeconomic variables. But Keynes’s analysis of those coordination failures was far from being fully satisfactory, which is not surprising; a theoretical pioneer rarely provides a fully satisfactory analysis, leaving lots of work for successors.

But I think that Keynes’s theoretical paradigm actually did lead macroeconomics in the wrong direction, in the direction of a highly aggregated model with a single output, a bond, a medium of exchange, and a labor market, with no explicit characterization of the production technology. (I.e., is there one factor or two, and, if two, how is the price of the second factor determined? See here, here, here, and here for my discussion of Earl Thompson’s “A Reformulation of Macroeconomic Theory,” which I hope at some point to revisit and continue.)

Why was it the wrong direction? Because, the Keynesian model (both Keynes’s own version and the Hicksian IS-LM version of his model) ruled out the sort of coordination problems that might arise in a multi-product, multi-factor, intertemporal model in which total output depends in a meaningful way on the meshing of the interdependent plans, independently formulated by decentralized decision-makers, contingent on possibly inconsistent expectations of the future. In the over-simplified and over-aggregated Keynesian model, the essence of the coordination problem has been assumed away, leaving only a residue of the actual problem to be addressed by the model. The focus of the model is on aggregate expenditure, income, and output flows, with no attention paid to the truly daunting task of achieving sufficient coordination among the independent decision makers to allow total output and income to closely approximate the maximum sustainable output and income that the system could generate in a perfectly coordinated state, aka full intertemporal equilibrium.

This way of thinking about macroeconomics led to the merging of macroeconomics with neoclassical growth theory and to the routine and unthinking incorporation of aggregate production functions in macroeconomic models, a practice that is strictly justified only in a single-output, two-factor model in which the value of capital is independent of the rate of interest, so that the havoc-producing effects of reswitching and capital-reversal can be avoided. Eventually, these models were taken over by modern real-business-cycle theorists, who dogmatically rule out any consideration of coordination problems, while attributing all observed output and employment fluctuations to random productivity shocks. If one thinks of macroeconomics as an attempt to understand coordination failures, the RBC explanation of output and employment fluctuations is totally backwards; productivity fluctuations, like fluctuations in output and employment, are not the results of unexplained random disturbances; they are the symptoms of coordination failures. That’s it, eureka! Solve the problem by assuming that it does not exist.

If you are thinking that this seems like an Austrian critique of the Keynesian model or the Keynesian approach, you are right; it is an Austrian critique. But it has nothing to do with stereotypical Austrian policy negativism; it is a critique of the oversimplified structure of the Keynesian model, which foreshadowed the reductio ad absurdum of modern real-business-cycle theory, which has nearly banished the idea of coordination failures from modern macroeconomics. The critique is not about the lack of a roundabout capital structure; it is about the narrow scope for inconsistencies in production and consumption plans.

I think that Leijonhufvud, almost 40 years ago, was getting at this point when he wrote the following paragraph near the end of his book on Keynes.

The unclear mix of statics and dynamics [in the General Theory] would seem to be the main reason for later muddles. One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of simple, “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track – and that is probably advisable – my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (pp. 401-02)

I don’t think that we actually need to go back to Hayek, though “Economics and Knowledge” should certainly be read by every macroeconomist, but we do need to get a clearer understanding of the potential for breakdowns in economic activity to be caused by inconsistent expectations, especially when expectations are themselves mutually dependent and reinforcing. Because expectations are mutually interdependent, they are highly susceptible to network effects. Network effects produce tipping points, and tipping points can lead to catastrophic outcomes. Just wanted to share that with you. Have a nice day.

Nick Rowe Teaches Us a Lot about Apples and Bananas

Last week I wrote a post responding to a post by Nick Rowe about money and coordination failures. Over the weekend, Nick posted a response to my post (and to one by Brad Delong). Nick’s latest post was all about apples and bananas. It was an interesting post, though for some reason – no doubt unrelated to its form or substance – I found the post difficult to read and think about. But having now read, and I think, understood (more or less), what Nick wrote, I confess to being somewhat underwhelmed. Let me try to explain why I don’t think that Nick has adequately addressed the point that I was raising.

That point is that, while coordination failures can indeed be, and frequently are, the result of a monetary disturbance, one that creates an excess demand for money, thereby leading to a contraction of spending, and thus to a reduction of output and employment, it is also possible for a coordination failure to occur independently of a monetary disturbance, at least of a disturbance that could be characterized as an excess demand for money that triggers a reduction in spending, income, output, and employment.

Without evaluating his reasoning, I will just restate key elements of Nick’s model – actually two parallel models. There are apple trees and banana trees, and people like to consume both apples and bananas. Some people own apple trees, and some people own banana trees. Owners of apple trees and owners of banana trees trade apples for bananas, so that they can consume a well-balanced diet of both apples and bananas. Oh, and there’s also some gold around. People like gold, but it’s not clear why. In one version of the model, people use it as a medium of exchange, selling bananas for gold and using gold to buy apples or selling apples for gold and using gold to buy bananas. In the other version of the model, people just barter apples for bananas. Nick then proceeds to show that if trade is conducted by barter, an increase in the demand for gold does not affect the allocation of resources, because agents continue to trade apples for bananas to achieve the desired allocation, even if the value of gold is held fixed. However, if trade is mediated by gold, the increased demand for gold, with prices held fixed, implies corresponding excess supplies of both apples and bananas, preventing the optimal reallocation of apples and bananas through trade, which Nick characterizes as a recession. However, if there is a shift in demand from bananas to apples or vice versa, with prices fixed in either model, there will be an excess demand for bananas and an excess supply of apples (or vice versa). The outcome is suboptimal because Pareto-improving trade is prevented, but there is no recession in Nick’s view because the excess supply of one real good is exactly offset by an excess demand for the other real good. Finally, Nick considers a case in which there is trade in apple trees and banana trees. An increase in the demand for fruit trees, owing to a reduced rate of time preference, causes no problems in the barter model, because there is no impediment to trading apples for bananas. However, in the money model, the reduced rate of time preference causes an increase in the amount of gold people want to hold, because the foregone interest from holding more gold has been reduced, which prevents optimal trade with prices held fixed.
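
The budget-constraint arithmetic behind the monetary-economy result is easy to exhibit. The sketch below uses toy numbers of my own (not Nick’s): each grower sells his crop at fixed prices and spends the proceeds on the other fruit, minus whatever gold he now wants to add to his hoard. An increased demand to hold gold shows up, by Walras’s law, as an excess supply of both fruits.

```python
# A numerical sketch (my own toy numbers, not Nick's) of the monetary version of
# the model: prices are fixed, and an increased demand to hold gold appears, via
# each agent's budget constraint, as an excess *supply* of both fruits.

p_apple, p_banana = 1.0, 1.0          # fixed prices in terms of gold
apple_crop, banana_crop = 10.0, 10.0  # endowments of the two growers

def planned_trades(extra_gold_demand):
    """Each grower sells his own fruit and spends the proceeds on the other
    fruit, minus whatever gold he now wishes to add to his hoard."""
    # Apple grower: sells apples, buys bananas.
    apple_sales = apple_crop
    banana_purchases = (p_apple * apple_sales - extra_gold_demand) / p_banana
    # Banana grower: sells bananas, buys apples (symmetric).
    banana_sales = banana_crop
    apple_purchases = (p_banana * banana_sales - extra_gold_demand) / p_apple
    excess_demand_apples = apple_purchases - apple_sales
    excess_demand_bananas = banana_purchases - banana_sales
    excess_demand_gold = 2 * extra_gold_demand    # both growers want to add to their hoards
    return excess_demand_apples, excess_demand_bananas, excess_demand_gold

for dm in (0.0, 2.0):
    za, zb, zg = planned_trades(dm)
    print(f"extra gold demand per agent = {dm}: "
          f"excess demand apples {za:+.1f}, bananas {zb:+.1f}, gold {zg:+.1f}")
    # Walras's law: the value of all excess demands (fruits plus gold) sums to zero.
    assert abs(p_apple * za + p_banana * zb + zg) < 1e-12
```

In the barter version there is no gold term in the budget constraint, so nothing corresponding can happen; that asymmetry is the whole of the result.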

Here are the conclusions that Nick draws from his two models.

Bottom line. My conclusions.

For the second shock (a change in preferences away from apples towards bananas), we get the same reduction in the volume of trade whether we are in a barter or a monetary economy. Monetary coordination failures play no role in this sort of “recession”. But would we call that a “recession”? Well, it doesn’t look like a normal recession, because there is an excess demand for bananas.

For both the first and third shocks, we get a reduction in the volume of trade in a monetary economy, and none in the barter economy. Monetary coordination failures play a decisive role in these sorts of recessions, even though the third shock that caused the recession was not a monetary shock. It was simply an increased demand for fruit trees, because agents became more patient. And these sorts of recessions do look like recessions, because there is an excess supply of both apples and bananas.

Or, to say the same thing another way: if we want to understand a decrease in output and employment caused by structural unemployment, monetary coordination failures don’t matter, and we can ignore money. Everything else is a monetary coordination failure. Even if the original shock was not a monetary shock, that non-monetary shock can cause a recession because it causes a monetary coordination failure.

Why am I underwhelmed by Nick’s conclusions? Well, it just seems that, WADR, he is making a really trivial point. I mean in a two-good world with essentially two representative agents, there is not really that much that can go wrong. To put this model through its limited endowment of possible disturbances, and to show that only an excess demand for money implies a “recession,” doesn’t seem to me to prove a great deal. And I was tempted to say that the main thing that it proves is how minimal is the contribution to macroeconomic understanding that can be derived from a two-good, two-agent model.

But, in fact, even within a two-good, two-agent model, it turns out that there is room for a coordination problem that Nick does not consider. In his very astute comment on Nick’s post, Kevin Donoghue correctly pointed out that even trade between an apple grower and a banana grower depends on the expectation of each that the other will actually have something to sell in the next period. How much each one plants depends on how much he expects the other to plant. If neither expects the other to plant, the output of both will fall.
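Kevin's point can be illustrated with a bare-bones sketch, under assumed functional forms of my own: each grower plants only if he expects the other to bring enough fruit to market to make planting worthwhile, so optimism and pessimism are both self-fulfilling.

```python
# Illustrative only: the capacity, threshold, and starting expectations are
# assumptions, not anything in Kevin Donoghue's comment or Nick's model.

def planned_output(expected_other_output, capacity=10.0, threshold=4.0):
    """Planting has a fixed cost, so it pays only if enough of the other
    fruit is expected to come to market to trade for."""
    return capacity if expected_other_output >= threshold else 0.0

def sequence(initial_expectation, periods=5):
    expectation, path = initial_expectation, []
    for _ in range(periods):
        output = planned_output(expectation)  # both growers are symmetric
        expectation = output                  # next period's expectation is this period's outcome
        path.append(output)
    return path

print("optimistic start:", sequence(10.0))   # stays at full output
print("pessimistic start:", sequence(2.0))   # collapses to zero output
```

Both paths are equilibria of the same two-good, two-agent economy; which one obtains depends entirely on expectations, with no excess demand for money anywhere in sight.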

Commenting on an excellent paper by Backhouse and Laidler about the promising developments in macroeconomics that were cut short by the IS-LM revolution, I made reference to a passage quoted by Backhouse and Laidler from Bjorn Hansson about the Stockholm School. It was the Stockholm School, along with Hayek, that really began to think deeply about the relationship between expectations and coordination failures. Keynes also thought about that, but didn’t grasp the point as deeply as the Swedes and the Austrians did. Sorry to quote myself, but it’s already late and I’m getting tired. I think the quote explains what is so lacking in a lot of modern macroeconomics and, I am sorry to say, in Nick’s discussion of apples and bananas.

Backhouse and Laidler go on to cite the Stockholm School (of which Ohlin was a leading figure) as an example of explicitly dynamic analysis.

As Bjorn Hansson (1982) has shown, this group developed an explicit method, using the idea of a succession of “unit periods,” in which each period began with agents having plans based on newly formed expectations about the outcome of executing them, and ended with the economy in some new situation that was the outcome of market processes set in motion by the incompatibility of those plans, and in which expectations had been reformulated, too, in the light of experience. They applied this method to the construction of a wide variety of what they called “model sequences,” many of which involved downward spirals in economic activity at whose very heart lay rising unemployment. This is not the place to discuss the vexed question of the extent to which some of this work anticipated the Keynesian multiplier process, but it should be noted that, in IS-LM, it is the limit to which such processes move, rather than the time path they follow to get there, that is emphasized.
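A stylized rendering of that unit-period recursion may make the method concrete. It is my own toy example, not anything in Hansson or in Backhouse and Laidler: each period, output is planned on the basis of expected demand, realized demand falls short, and expectations are reformulated for the next period, tracing out a downward "model sequence" toward a limiting position.

```python
# Illustrative assumptions throughout: autonomous spending of 10, a marginal
# propensity to spend of 0.6, and an initial demand expectation of 40.

def model_sequence(expected_demand, autonomous=10.0, propensity=0.6, periods=8):
    path = []
    for _ in range(periods):
        output = expected_demand                              # plans based on expectations
        realized_demand = autonomous + propensity * output    # market outcome of executing the plans
        expected_demand = realized_demand                     # expectations reformulated in light of experience
        path.append(round(output, 1))
    return path

print(model_sequence(expected_demand=40.0))
# [40.0, 34.0, 30.4, 28.2, 26.9, 26.2, 25.7, 25.4], drifting toward 25
```

The limiting value of 25 is the kind of determinate equilibrium that, as discussed below, Keynes singled out; the Stockholm method is concerned with the disequilibrium path by which the economy approaches it.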

The Stockholm method seems to me exactly the right way to explain business-cycle downturns. In normal times, there is a rough – certainly not perfect, but good enough — correspondence of expectations among agents. That correspondence of expectations implies that the individual plans contingent on those expectations will be more or less compatible with one another. Surprises happen; here and there people are disappointed and regret past decisions, but, on the whole, they are able to adjust as needed to muddle through. There is usually enough flexibility in a system to allow most people to adjust their plans in response to unforeseen circumstances, so that the disappointment of some expectations doesn’t become contagious, causing a systemic crisis.

But when there is some sort of major shock – and it can only be a shock if it is unforeseen – the system may not be able to adjust. Instead, the disappointment of expectations becomes contagious. If my customers aren’t able to sell their products, I may not be able to sell mine. Expectations are like networks. If there is a breakdown at some point in the network, the whole network may collapse or malfunction. Because expectations and plans fit together in interlocking networks, it is possible that even a disturbance at one point in the network can cascade over an increasingly wide group of agents, leading to something like a system-wide breakdown, a financial crisis or a depression.
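Here is a minimal sketch of that cascade, with an assumed linear chain of producers in which each firm can sell only if its downstream customer can sell; the chain structure and the single initial failure are my own illustrative assumptions, not a claim about how any actual network is wired.

```python
# Illustrative only: a chain of six firms, each selling to the next one.

def cascade(num_firms=6, initially_failed=None):
    """Return, for each firm, whether it can still sell after the shock propagates."""
    can_sell = [True] * num_firms
    if initially_failed is not None:
        can_sell[initially_failed] = False
    changed = True
    while changed:
        changed = False
        for i in range(num_firms - 1):
            # Firm i sells to firm i + 1; if my customer cannot sell, neither can I.
            if can_sell[i] and not can_sell[i + 1]:
                can_sell[i] = False
                changed = True
    return can_sell

print(cascade())                      # no shock: every firm sells
print(cascade(initially_failed=4))    # one disappointment propagates back up the chain
```

A single disappointed expectation at one node leaves most of the chain unable to sell, which is the sense in which a local disturbance can become something like a system-wide breakdown.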

But the “problem” with the Stockholm method was that it was open-ended. It could offer only “a wide variety” of “model sequences,” without specifying a determinate solution. It was just this gap in the Stockholm approach that Keynes was able to fill. He provided a determinate equilibrium, “the limit to which the Stockholm model sequences would move, rather than the time path they follow to get there.” A messy, but insightful, approach to explaining the phenomenon of downward spirals in economic activity coupled with rising unemployment was cast aside in favor of the neater, simpler approach of Keynes. No wonder Ohlin sounds annoyed in his comment, quoted by Backhouse and Laidler, about Keynes. Tractability trumped insight.

Unfortunately, that is still the case today. Open-ended models of the sort that the Stockholm School tried to develop still cannot compete with the RBC and DSGE models that have displaced IS-LM and now dominate modern macroeconomics. The basic idea that modern economies form networks, and that networks have properties that are not reducible to just the nodes forming them, has yet to penetrate the trained intuition of modern macroeconomists. Otherwise, how would it have been possible to imagine that a macroeconomic model could consist of a single representative agent? And just because modern macroeconomists have expanded their models to include more than a single representative agent doesn’t mean that the intellectual gap evidenced by the introduction of representative-agent models into macroeconomic discourse has been closed.

Responding to Scott Sumner

Scott Sumner cites this passage from my previous post about coordination failures.

I can envision a pure barter economy with incorrect price expectations in which individual plans are in a state of discoordination. Or consider a Fisherian debt-deflation economy in which debts are denominated in terms of gold and gold is appreciating. Debtors restrict consumption not because they are trying to accumulate more cash but because their debt burden is so great, any income they earn is being transferred to their creditors. In a monetary economy suffering from debt deflation, one would certainly want to use monetary policy to alleviate the debt burden, but using monetary policy to alleviate the debt burden is different from using monetary policy to eliminate an excess demand for money. Where is the excess demand for money?

Evidently, Scott doesn’t quite find my argument that coordination failures are possible, even without an excess demand for money, persuasive. So he puts the following question to me.

Why is it different from alleviating an excess demand for money?

I suppose that my response is this: I am not sure what the question means. Does Scott mean to say that he does not accept that in my examples there really is no excess demand for money? Or does he mean that the effects of the coordination failure are no different from what they would be if there were an excess demand for money, so that any deflationary problem could be treated by increasing the quantity of money, thereby creating an excess supply of money? If Scott’s question is the latter, then he might be saying that the two cases are observationally equivalent, so that my distinction between a coordination failure with an excess demand for money and a coordination failure without one is really not a difference worth making a fuss about. The first question raises an analytical issue; the second, a pragmatic one.

Scott continues:

As far as I know the demand for money is usually defined as either M/P or the Cambridge K.  In either case, a debt crisis might raise the demand for money, and cause a recession if the supply of money is fixed.  Or the Fed could adjust the supply of money to offset the change in the demand for money, and this would prevent any change in AD, P, and NGDP.

I don’t know what Scott means when he says that the demand for money is usually defined as M/P. M/P is a quantity (the real value of the existing stock of money), not a demand schedule. The demand for money is some functional relationship between desired holdings of money and a list of variables that influence those desired holdings. To say that the demand for money is defined as M/P is to assert an identity between the amount of money demanded and the amount in existence, which rules out an excess demand for money by definition, so now I am really confused. The Cambridge k expresses the demand for money in terms of a desired relationship between the amount of money held and nominal income. But again, I can’t tell whether Scott is thinking of k as a functional relationship that depends on a list of variables or as a definition, in which case the existence of an excess demand for money is ruled out by definition. So I am still confused.
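The distinction I am drawing can be put in a few lines of Python, with functional forms and numbers that are my own illustrative assumptions: treated as a behavioral relationship, the Cambridge equation lets desired money holdings differ from the existing stock, so an excess demand for money is a meaningful magnitude; treated as a definition, that magnitude is zero by construction.

```python
# Illustrative numbers only.
M = 200.0          # existing nominal money stock
P, Y = 2.0, 400.0  # price level and real income

def money_demand(k, price_level, real_income):
    """Desired nominal money holdings, Cambridge-style: Md = k * P * Y."""
    return k * price_level * real_income

# k as a behavioral parameter: desired holdings can differ from the stock.
k_desired = 0.3
print("excess demand for money:", money_demand(k_desired, P, Y) - M)   # 40.0

# k as a definition, k = M / (P * Y): excess demand vanishes by construction.
k_definitional = M / (P * Y)
print("excess demand under the definition:", money_demand(k_definitional, P, Y) - M)  # 0.0
```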

I agree that a debt crisis could raise the demand for money, but in my example, it is entirely plausible that, on balance, the demand for money to hold went down because debtors would have to use all their resources to pay the interest owed on their debts.

I don’t disagree that the Fed could engage in a monetary policy that would alleviate the debt burden, but the problem it would be addressing would not be an excess demand for money; the problem being addressed would be the debt burden. But under a gold clause, inflation wouldn’t help, because creditors would be protected from inflation by the requirement that they be repaid in terms of a constant gold value.

Scott concludes:

Perhaps David sees the debt crisis working through supply-side channels—causing a recession despite no change in NGDP.  That’s possible, but it’s not at all clear to me that this is what David has in mind.

The case I had in mind may or may not be associated with a change in NGDP, but any change in NGDP was not induced by an excess demand for money; it was induced by an increase in the value of gold when debts were denominated, as they were under the gold clause, in terms of gold.

I hope that this helps.

PS I see that Nick Rowe has a new post responding to my previous post. I have not yet read it. But it is near the top of my required reading list, so I hope to have a response for him in the next day or two.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
