Archive for the 'Roy Radner' Category

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the search for microfoundations, making clear that the search long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who was interested in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing the Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.
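
To see how specifying payoffs makes optimal probabilistic strategies computable, here is a minimal sketch of a two-by-two zero-sum version of the Holmes-Moriarty game. The payoff numbers are purely illustrative (they are not taken from von Neumann and Morgenstern’s text), and the closed-form mixing formulas assume the game has no saddle point in pure strategies.

```python
# A minimal sketch: mixed-strategy equilibrium of a 2x2 zero-sum game.
# Payoff numbers are illustrative only.
import numpy as np

# Rows: Holmes stops at Canterbury / stays on to Dover.
# Columns: Moriarty stops at Canterbury / stays on to Dover.
# Entries are payoffs to Holmes; Moriarty's payoffs are the negatives.
A = np.array([[-100.0,    0.0],    # same station: Holmes caught; otherwise a temporary escape
              [  50.0, -100.0]])   # Holmes reaches Dover safely only if Moriarty stops early

# With no saddle point in pure strategies, each player mixes so as to leave
# the opponent indifferent between his two pure strategies.
den = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
p = (A[1, 1] - A[1, 0]) / den     # probability Holmes stops at Canterbury
q = (A[1, 1] - A[0, 1]) / den     # probability Moriarty stops at Canterbury
value = (p * q * A[0, 0] + p * (1 - q) * A[0, 1]
         + (1 - p) * q * A[1, 0] + (1 - p) * (1 - q) * A[1, 1])

print(f"Holmes mixes {p:.2f}/{1-p:.2f}, Moriarty mixes {q:.2f}/{1-q:.2f}, game value {value:.1f}")
```

With these illustrative payoffs, Holmes stops at Canterbury with probability 0.6 and Moriarty with probability 0.4, each player's mix leaving the other indifferent between his two pure strategies; the point is only that once payoffs are specified, the mutual consistency of the two choices can be characterized.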

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually came to believe, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic setting to be reconciled – i.e., rendered mutually consistent – in equilibrium. Under Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis – that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as they came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions under which a general equilibrium would be unique and stable.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of differential equations, describing market price adjustments as functions of market excess demands, could be shown by Lyapunov’s methods to converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
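
To give a rough sense of the kind of exercise involved (a toy sketch, not Samuelson's own formulation), the following simulates the adjustment rule dp/dt = k·z(p) for a single relative price with a made-up, well-behaved excess-demand function. In this contrived one-market case the process converges; the whole difficulty is that nothing guarantees convergence once one moves to a full system of interrelated markets.

```python
# A toy tatonnement exercise (illustrative only; the excess-demand function
# is invented, not derived from any particular preferences).
# Good 2 is the numeraire; p is the relative price of good 1.

def excess_demand(p):
    """Aggregate excess demand for good 1; falls as its relative price rises."""
    return 10.0 / p - 4.0   # zero at the market-clearing price p* = 2.5

def tatonnement(p0, k=0.5, dt=0.01, steps=5000):
    """Euler-integrate dp/dt = k * z(p), the price-adjustment rule."""
    p = p0
    for _ in range(steps):
        p += dt * k * excess_demand(p)
    return p

print(tatonnement(1.0))   # approaches 2.5, the equilibrium relative price
```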

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.
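
To make the nature of the exercise concrete, here is a minimal sketch using an invented linear supply-demand market: comparative statics reports only the difference between the old and the new equilibrium after a parameter change, and says nothing about any adjustment path between them.

```python
# Comparative statics in a made-up linear market: demand Q = a - b*P,
# supply Q = c + d*P.  Equilibrium price: P* = (a - c) / (b + d).

def equilibrium_price(a, b, c, d):
    return (a - c) / (b + d)

b, c, d = 2.0, 1.0, 3.0
p_old = equilibrium_price(a=11.0, b=b, c=c, d=d)   # initial equilibrium
p_new = equilibrium_price(a=16.0, b=b, c=c, d=d)   # after a demand shift (a rises by 5)

# The comparative-statics "result" is just the difference between equilibria;
# nothing here describes how, or whether, the market travels from one to the other.
print(p_old, p_new, p_new - p_old)   # 2.0, 3.0, 1.0
```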

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics exercises routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had raised in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel De Vroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of the methodological principles of Lucasian microfoundations. De Vroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous, microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” De Vroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, De Vroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, De Vroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline, to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. De Vroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as De Vroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, De Vroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, De Vroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.
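
To put the point schematically (a stylized sketch of my own, not anything in De Vroey's text): each agent can solve his own optimization problem at the prices he expects, but whether the resulting plans can be executed depends on whether they are mutually consistent, that is, whether planned net trades sum to zero in every market.

```python
# Each agent's plan is a vector of planned net trades (positive = planned purchase,
# negative = planned sale), formed at whatever prices that agent expects.
# Individually optimal plans are mutually consistent only if markets clear.

def plans_consistent(planned_net_trades, tol=1e-9):
    """Return True if planned net trades sum to (approximately) zero in every market."""
    n_goods = len(planned_net_trades[0])
    return all(abs(sum(plan[g] for plan in planned_net_trades)) < tol
               for g in range(n_goods))

# Two agents, one good: both plan to buy, because each expects a bargain price.
print(plans_consistent([[+1.0], [+1.0]]))   # False: optimal plans, but not executable
print(plans_consistent([[+1.0], [-1.0]]))   # True: the plans mesh, so behavior can be optimal
```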

That is a remarkable non-sequitur. And from that non-sequitur, Lucas jumps to a further non-sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding it conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital, and the absence of endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, toward a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher, in his important work The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance on how to improve the current sorry state of macroeconomics.

 

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving a “meaningful theorem” from the first- and second-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects of the parameter change on other markets to be negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems” were a misnomer reflecting mid-20th-century operationalism. They can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.
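
To illustrate the point, here is a sketch in the spirit of Scarf's well-known three-good example; the particular preferences, endowments and excess-demand functions below are my reconstruction for illustration, not anything in the thread. A tatonnement rule that raises the price of any good in excess demand keeps circling the unique equilibrium instead of converging to it.

```python
import numpy as np

# A sketch in the spirit of Scarf's example: three goods, three consumers,
# consumer i endowed with one unit of good i and wanting goods i and i+1 in
# fixed 1:1 proportions.  Excess demand for good i is then
#   z_i = p_i/(p_i + p_{i+1}) + p_{i-1}/(p_{i-1} + p_i) - 1,
# and the unique equilibrium (up to scale) is p = (1, 1, 1).

def excess_demand(p):
    z = np.empty(3)
    for i in range(3):
        z[i] = (p[i] / (p[i] + p[(i + 1) % 3])
                + p[(i - 1) % 3] / (p[(i - 1) % 3] + p[i])
                - 1.0)
    return z

p = np.array([0.7, 1.0, 1.3])        # start away from the equilibrium ray
dt = 0.01
for step in range(20_000):
    p = p + dt * excess_demand(p)    # tatonnement: raise the price of any good in excess demand
    if step % 5_000 == 0:
        print(step, np.round(p, 3))  # prices keep circling (1, 1, 1) instead of converging
```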

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model, writing in 1959:

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices subject to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning with each agent possessing the knowledge only possessed by an omniscient central planner.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transactions prices toward that common expectation notwithstanding the existence of expectational differences.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

My Paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” Is Now Available Online.

The paper, forthcoming in The Review of Austrian Economics, can be read online.

Here is the abstract:

Hayek was among the first to realize that for intertemporal equilibrium to obtain all agents must have correct expectations of future prices. Before comparing four categories of intertemporal equilibrium, the paper explains Hayek’s distinction between correct expectations and perfect foresight. The four equilibrium concepts considered are: (1) perfect-foresight equilibrium, of which the Arrow-Debreu-McKenzie (ADM) model of equilibrium with complete markets is an alternative version; (2) Radner’s sequential equilibrium with incomplete markets; (3) Hicks’s temporary equilibrium, as extended by Bliss; (4) the Muth rational-expectations equilibrium as extended by Lucas into macroeconomics. While Hayek’s understanding closely resembles Radner’s sequential equilibrium, described by Radner as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium seems to have been the natural extension of Hayek’s approach. The now dominant Lucas rational-expectations equilibrium misconceives intertemporal equilibrium, suppressing Hayek’s insights and thereby retreating to a sterile perfect-foresight equilibrium.

And here is my concluding paragraph:

Four score and three years after Hayek explained how challenging were the subtleties of the notion of intertemporal equilibrium and how elusive any theoretical account of an empirical tendency toward intertemporal equilibrium, modern macroeconomics has now built a formidable theoretical apparatus founded on a methodological principle that rejects all the concerns that Hayek found so vexing and denies that all those difficulties even exist. Many macroeconomists feel proud of what modern macroeconomics has achieved, but there is reason to think that the path trod by Hayek, Hicks and Radner could have led macroeconomics in a more fruitful direction than the one on which it has been led by Lucas and his associates.

My Paper on Hayek, Hicks and Radner and 3 Equilibrium Concepts Now Available on SSRN

A little over a year ago, I posted a series of posts (here, here, here, here, and here) that came together as a paper (“Hayek and Three Equilibrium Concepts: Sequential, Temporary and Rational-Expectations”) that I presented at the History of Economics Society in Toronto in June 2017. After further revisions I posted the introductory section and the concluding section in April before presenting the paper at the Colloquium on Market Institutions and Economic Processes at NYU.

I have since been making further revisions and tweaks to the paper as well as adding the names of Hicks and Radner to the title, and I have just posted the current version on SSRN where it is available for download.

Here is the abstract:

Along with Erik Lindahl and Gunnar Myrdal, F. A. Hayek was among the first to realize that the necessary conditions for intertemporal, as opposed to stationary, equilibrium could be expressed in terms of correct expectations of future prices, often referred to as perfect foresight. Subsequently, J. R. Hicks further elaborated the concept of intertemporal equilibrium in Value and Capital in which he also developed the related concept of a temporary equilibrium in which future prices are not correctly foreseen. This paper attempts to compare three important subsequent developments of that idea with Hayek’s 1937 refinement of his original 1928 paper on intertemporal equilibrium. As a preliminary, the paper explains the significance of Hayek’s 1937 distinction between correct expectations and perfect foresight. In non-chronological order, the three developments of interest are: (1) Roy Radner’s model of sequential equilibrium with incomplete markets as an alternative to the Arrow-Debreu-McKenzie model of full equilibrium with complete markets; (2) Hicks’s temporary equilibrium model, and an important extension of that model by C. J. Bliss; (3) the Muth rational-expectations model and its illegitimate extension by Lucas from its original microeconomic application into macroeconomics. While Hayek’s 1937 treatment most closely resembles Radner’s sequential equilibrium model, which Radner, echoing Hayek, describes as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium model would seem to have been the natural development of Hayek’s approach. The now dominant Lucas rational-expectations approach misconceives intertemporal equilibrium and ignores the fundamental Hayekian insights about the meaning of intertemporal equilibrium.

Hayek, Radner and Rational-Expectations Equilibrium

In revising my paper on Hayek and Three Equilibrium Concepts, I have made some substantial changes to the last section which I originally posted last June. So I thought I would post my new updated version of the last section. The new version of the paper has not been submitted yet to a journal; I will give a talk about it at the colloquium on Economic Institutions and Market Processes at the NYU economics department next Monday. Depending on the reaction I get at the Colloquium and from some other people I will send the paper to, I may, or may not, post the new version on SSRN and submit to a journal.

In this section, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. It is noteworthy that in his discussions of intertemporal equilibrium, Roy Radner assigns a  meaning to the term “rational-expectations equilibrium” very different from the one normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents can make inferences about the beliefs of other agents when observed prices differ from the prices that the agents had expected. Agents attribute the differences between observed and expected prices to the superior information held by better-informed agents. As they assimilate the information that must have caused observed prices to deviate from their expectations, agents revise their own expectations accordingly, which, in turn, leads to further revisions in plans, expectations and outcomes.

There is a somewhat famous historical episode of inferring otherwise unknown or even secret information from publicly available data about prices. In 1954, one very rational agent, Armen Alchian, was able to identify which chemicals were being used in making the newly developed hydrogen bomb by looking for companies whose stock prices had risen too rapidly to be otherwise explained. Alchian, who spent almost his entire career at UCLA while moonlighting at the nearby Rand Corporation, wrote a paper at Rand listing the chemicals used in making the hydrogen bomb. When news of his unpublished paper reached officials at the Defense Department – the Rand Corporation (from whose files Daniel Ellsberg took the Pentagon Papers) having been started as a think tank with funding by the Department of Defense to do research on behalf of the U.S. military – the paper was confiscated from Alchian’s office at Rand and destroyed. (See Newhard’s paper for an account of the episode and a reconstruction of Alchian’s event study.)

But Radner also showed that the ability of some agents to infer the information on which other agents are acting, and which is causing prices to differ from the prices that had been expected, does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of future prices based on common knowledge. Radner’s result reinforces Hayek’s insight, upon which I remarked above, that although expectations are equilibrating variables there is no economic mechanism that tends to bring expectations toward their equilibrium values. There is no feedback mechanism, corresponding to the normal mechanism for adjusting market prices in response to perceived excess demands or supplies, that operates on price expectations. The heavy lifting of bringing expectations into correspondence with what the future holds must be done by the agents themselves; the magic of the market goes only so far.

Although Radner’s conception of rational expectations differs from the more commonly used meaning of the term, his conception helps us understand the limitations of the conventional “rational expectations” assumption in modern macroeconomics, which is that the price expectations formed by the agents populating a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is an important property of any model. If one assumes that the outcome expected by agents in a model is the equilibrium predicted by the model, then, under those expectations, the solution of the model ought to be the equilibrium of the model. If the solution of the model is somehow different from what agents in the model expect, then there is something really wrong with the model.

What kind of crazy model would have the property that correct expectations turn out not to be self-fulfilling? A model in which correct expectations are not self-fulfilling is a nonsensical model. But there is a huge difference between saying (a) that a model should have the property that correct expectations are self-fulfilling and saying (b) that the agents populating the model understand how the model works and, based on their knowledge of the model, form expectations of the equilibrium predicted by the model.
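
A minimal sketch of sense (a), using an invented one-equation model: rational expectations in the consistency sense just means that the expected price is a fixed point of the mapping from expected to realized prices, so that a correct expectation is self-fulfilling.

```python
# Toy model (invented numbers): realized price P = alpha + beta * E[P].
# "Rational expectations" in the consistency sense is just the fixed point
# E[P] = alpha + beta * E[P], i.e., the expectation the model itself validates.

alpha, beta = 10.0, 0.5

def realized_price(expected_price):
    return alpha + beta * expected_price

rational_expectation = alpha / (1.0 - beta)
print(rational_expectation, realized_price(rational_expectation))  # 20.0 20.0: self-fulfilling
print(realized_price(15.0))                                        # 17.5: this expectation is not
```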

Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t credibly claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a methodological imperative. But the current sacrosanct status of rational expectations in modern macroeconomics has been achieved largely through methodological tyrannizing.

In his 1937 paper, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most faithful description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that, over time, expectations somehow do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth (1961), he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the cobweb-cycle model in which producers base current decisions about how much to produce for the following period on the currently observed price. But with a one-period time lag between production decisions and realized output, as is the case in agricultural markets in which the initial application of inputs does not result in output until a subsequent time period, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would instead base their output decisions on realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectations assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.
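
A minimal sketch of the contrast Muth drew, using invented linear demand and supply parameters: under naive, backward-looking expectations the cobweb model oscillates around the equilibrium price, while producers who expect the equilibrium price itself produce the equilibrium output from the start.

```python
# Cobweb model with linear demand P_t = a - b*Q_t and supply Q_t = c + d*E[P_t].
# Parameters are invented for illustration.

a, b, c, d = 100.0, 1.0, 10.0, 0.8
p_star = (a - b * c) / (1 + b * d)    # equilibrium price, at which expected = realized price

# Naive (cobweb) expectations: E[P_t] = P_{t-1}
p = 70.0                              # initial observed price
naive_path = []
for _ in range(10):
    q = c + d * p                     # output decided a period ahead, at last period's price
    p = a - b * q                     # market-clearing price once that output arrives
    naive_path.append(round(p, 2))

print("equilibrium price:", round(p_star, 2))
print("naive-expectations price path:", naive_path)   # oscillates around p_star
# Muth's rational expectation: producers expect p_star, so Q = c + d*p_star and the
# realized price equals p_star every period (absent random shocks).
```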

So, while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational-expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – one that Hayek understood better than perhaps anyone else — is that there is a difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. It is those subtle interactions that allow the kinds of informational inferences that, based on differences between expected and realized prices of the sort contemplated by Alchian and Radner, can sometimes be made. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

The key — but far from the only — error of the rational-expectations methodology that rules modern macroeconomics is the notion that rational expectations somehow cause or bring about an intertemporal equilibrium. It is certainly a fact that people try very hard to use all the information available to them to predict what the future has in store, and any new bit of information not previously possessed will be rapidly assessed and assimilated and will inform a possibly revised set of expectations of the future. But there is no reason to think that this ongoing process of information gathering, processing and evaluation leads people to formulate correct expectations of the future or of future prices. Indeed, Radner proved that, even under strong assumptions, there is no necessity that a process of expectation revision based on the differences between observed and expected prices leads to an equilibrium.

So it cannot be rational expectations that leads to equilibrium. On the contrary, rational expectations are a property of equilibrium. To speak of a “rational-expectations equilibrium” is to utter a truism. There can be no rational expectations in the macroeconomy except in an equilibrium state, because correct expectations, as Hayek showed, are a defining characteristic of equilibrium. Outside of equilibrium, expectations cannot be rational. Failure to grasp that point is what led Morgenstern astray in thinking that the Holmes-Moriarty story demonstrated the nonsensical nature of equilibrium. It simply demonstrated that Holmes and Moriarty were playing a non-repeated game in which an equilibrium did not exist.

To think of rational expectations as somehow bringing about equilibrium is nothing but a category error, akin to thinking that a triangle is caused by having angles that add up to 180 degrees. The 180-degree sum of the angles of a triangle doesn’t cause the triangle; it is a property of the triangle.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all future relative prices but one are known in advance. That assumption assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists).

Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are just a couple of variables about which agents have to form their rational expectations. The radical simplification of the expectational requirements for achieving a supposedly micro-founded equilibrium belies the claim to have achieved anything of the sort. Whether the micro-foundational pretense affected — with apparently sincere methodological fervor — by modern macroeconomics is merely self-delusional or a deliberate hoax perpetrated on a generation of unsuspecting students is an interesting question, but one lacking any practical significance.

Four score years since Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

 

On Equilibrium in Economic Theory

Here is the introduction to a new version of my paper, “Hayek and Three Concepts of Intertemporal Equilibrium” which I presented last June at the History of Economics Society meeting in Toronto, and which I presented piecemeal in a series of posts last May and June. This post corresponds to the first part of this post from last May 21.

Equilibrium is an essential concept in economics. While equilibrium is an essential concept in other sciences as well, and was probably imported into economics from physics, its meaning cannot be straightforwardly transferred from physics into economics. The dissonance between the physical meaning of equilibrium and its economic interpretation required a lengthy process of explication and clarification before the concept and its essential, though limited, role in economic theory could be coherently explained.

The concept of equilibrium having originally been imported from physics at some point in the nineteenth century, economists probably thought it natural to think of an economic system in equilibrium as analogous to a physical system at rest, in the sense of a system in which there was no movement or in which all movements were repetitive. But what would it mean for an economic system to be at rest? The obvious answer was to say that prices of goods and the quantities produced, exchanged and consumed would not change. If supply equals demand in every market, and if no exogenous disturbance (e.g., in population, technology, or tastes) displaces the system, then there would seem to be no reason for the prices paid and quantities produced to change in that system. But that conception of an economic system at rest was understood to be overly restrictive, given the large, and perhaps causally important, share of economic activity – savings and investment – that is predicated on the assumption and expectation that prices and quantities will not remain constant.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative to economists, but that view of equilibrium remained dominant in the nineteenth century and for perhaps the first quarter of the twentieth. Equilibrium was not an actual state that an economy could achieve; it was just an end state that economic processes would move toward if given sufficient time to play themselves out with no disturbing influences. This idea of a stationary timeless equilibrium is found in the writings of the classical economists, especially Ricardo and Mill, who used the idea of a stationary state as the end-state toward which natural economic processes were driving an economic system.

This, not very satisfactory, concept of equilibrium was undermined when Jevons, Menger, Walras, and their followers began to develop the idea of optimizing decisions by rational consumers and producers. The notion of optimality provided the key insight that made it possible to refashion the earlier classical equilibrium concept into a new, more fruitful and robust, version.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, and subject to limitations or constraints imposed by their capacities, endowments, technologies, and the legal system, then the equilibrium of an economy can be understood as a state in which each agent, given his subjective ranking of the feasible alternatives, is making an optimal decision, and each optimal decision is both consistent with, and contingent upon, those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell. But every decision, just like every piece in a jig-saw puzzle, must fit perfectly with every other decision. If any decision is suboptimal, none of the other decisions contingent upon that decision can be optimal.

The idea of an equilibrium as a set of independently conceived, mutually consistent, optimal plans was latent in the earlier notions of equilibrium, but it could only be coherently articulated on the basis of a notion of optimality. Originally framed in terms of utility maximization, the notion was gradually extended to encompass the ideas of cost minimization and profit maximization. The general concept of an optimal plan having been grasped, it then became possible to formulate a generically economic idea of equilibrium, not in terms of a system at rest, but in terms of the mutual consistency of optimal plans. Once equilibrium was conceived as the mutual consistency of optimal plans, the needless restrictiveness of defining equilibrium as a system at rest became readily apparent, though it remained little noticed and its significance overlooked for quite some time.

Because the defining characteristics of economic equilibrium are optimality and mutual consistency, change, even non-repetitive change, is not logically excluded from the concept of equilibrium as it was from the idea of an equilibrium as a stationary state. An optimal plan may be carried out, not just at a single moment, but over a period of time. Indeed, the idea of an optimal plan is, at the very least, suggestive of a future that need not simply repeat the present. So, once the idea of equilibrium as a set of mutually consistent optimal plans was grasped, it was to be expected that the concept of equilibrium could be formulated in a manner that accommodates the existence of change and development over time.

But the manner in which change and development could be incorporated into an equilibrium framework of optimality was not entirely straightforward, and it required an extended process of further intellectual reflection to formulate the idea of equilibrium in a way that gives meaning and relevance to the processes of change and development that make the passage of time something more than merely a name assigned to one of the n dimensions in vector space.

This paper examines the slow process by which the concept of equilibrium was transformed from a timeless or static concept into an intertemporal one by focusing on the pathbreaking contribution of F. A. Hayek who first articulated the concept, and exploring the connection between his articulation and three noteworthy, but very different, versions of intertemporal equilibrium: (1) an equilibrium of plans, prices, and expectations, (2) temporary equilibrium, and (3) rational-expectations equilibrium.

But before discussing these three versions of intertemporal equilibrium, I summarize in section two Hayek’s seminal 1937 contribution clarifying the necessary conditions for the existence of an intertemporal equilibrium. Then, in section three, I elaborate on an important, and often neglected, distinction, first stated and clarified by Hayek in his 1937 paper, between perfect foresight and what I call contingently correct foresight. That distinction is essential for an understanding of the distinction between the canonical Arrow-Debreu-McKenzie (ADM) model of general equilibrium, and Roy Radner’s 1972 generalization of that model as an equilibrium of plans, prices and price expectations, which I describe in section four.

Radner’s important generalization of the ADM model captured the spirit and formalized Hayek’s insights about the nature and empirical relevance of intertemporal equilibrium. But to be able to prove the existence of an equilibrium of plans, prices and price expectations, Radner had to make assumptions about agents that Hayek, in his philosophically parsimonious view of human knowledge and reason, had been unwilling to accept. In section five, I explore how J. R. Hicks’s concept of temporary equilibrium, clearly inspired by Hayek, though credited by Hicks to Erik Lindahl, provides an important bridge connecting the pure hypothetical equilibrium of correct expectations and perfect consistency of plans with the messy real world in which expectations are inevitably disappointed and plans routinely – and sometimes radically – revised. The advantage of the temporary-equilibrium framework is to provide the conceptual tools with which to understand how financial crises can occur and how such crises can be propagated and transformed into economic depressions, thereby making possible the kind of business-cycle model that Hayek tried unsuccessfully to create. But just as Hicks unaccountably failed to credit Hayek for the insights that inspired his temporary-equilibrium approach, Hayek failed to see the potential of temporary equilibrium as a modeling strategy that combines the theoretical discipline of the equilibrium method with the reality of expectational inconsistency across individual agents.

In section six, I discuss the Lucasian idea of rational expectations in macroeconomic models, mainly to point out that, in many ways, it simply assumes away the problem of the expectational consistency of plans with which Hayek, Hicks, Radner and others who developed the idea of intertemporal equilibrium were so profoundly concerned.

Hayek and Rational Expectations

In this, my final, installment on Hayek and intertemporal equilibrium, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. In his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the meaning normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents are able to make inferences about the beliefs held by other agents when observed prices differ from what they had expected prices to be. Agents attribute the differences between observed and expected prices to information held by agents better informed than themselves, and revise their own expectations accordingly in light of the information that would have justified the observed prices.

In the early 1950s, one very rational agent, Armen Alchian, was able to figure out what chemicals were being used in making the newly developed hydrogen bomb by identifying companies whose stock prices had risen too rapidly to be explained otherwise. Alchian, who spent almost his entire career at UCLA while also moonlighting at the nearby Rand Corporation, wrote a paper for Rand in which he listed the chemicals used in making the hydrogen bomb. When people at the Defense Department heard about the paper – the Rand Corporation was started as a think tank largely funded by the Department of Defense to do research that the Defense Department was interested in – they went to Alchian, confiscated and destroyed the paper. Joseph Newhard recently wrote a paper about this episode in the Journal of Corporate Finance. Here’s the abstract:

At RAND in 1954, Armen A. Alchian conducted the world’s first event study to infer the fuel material used in the manufacturing of the newly-developed hydrogen bomb. Successfully identifying lithium as the fusion fuel using only publicly available financial data, the paper was seen as a threat to national security and was immediately confiscated and destroyed. The bomb’s construction being secret at the time but having since been partially declassified, the nuclear tests of the early 1950s provide an opportunity to observe market efficiency through the dissemination of private information as it becomes public. I replicate Alchian’s event study of capital market reactions to the Operation Castle series of nuclear detonations in the Marshall Islands, beginning with the Bravo shot on March 1, 1954 at Bikini Atoll which remains the largest nuclear detonation in US history, confirming Alchian’s results. The Operation Castle tests pioneered the use of lithium deuteride dry fuel which paved the way for the development of high yield nuclear weapons deliverable by aircraft. I find significant upward movement in the price of Lithium Corp. relative to the other corporations and to DJIA in March 1954; within three weeks of Castle Bravo the stock was up 48% before settling down to a monthly return of 28% despite secrecy, scientific uncertainty, and public confusion surrounding the test; the company saw a return of 461% for the year.
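The mechanics of the event-study method described in the abstract are simple enough to sketch. The following toy calculation uses entirely hypothetical monthly returns (not the figures Newhard reports) just to show how abnormal returns relative to a market index are computed and cumulated over an event window:

```python
# Toy event-study sketch: abnormal return = stock return minus market return,
# cumulated over the event window. All numbers below are hypothetical placeholders.

months         = ["1954-01", "1954-02", "1954-03", "1954-04"]
stock_returns  = [0.02, 0.01, 0.48, -0.15]   # hypothetical returns for the firm of interest
market_returns = [0.01, 0.00, 0.02,  0.01]   # hypothetical returns on a market index (e.g., DJIA)

cumulative = 0.0
for month, r_stock, r_market in zip(months, stock_returns, market_returns):
    abnormal = r_stock - r_market            # return unexplained by market-wide movement
    cumulative += abnormal                   # cumulative abnormal return (CAR)
    print(f"{month}: abnormal {abnormal:+.2%}, cumulative {cumulative:+.2%}")
```

A large, persistent cumulative abnormal return around the event date is the signature that the market has impounded information specific to the firm, which is the kind of inference Alchian drew about Lithium Corp.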

Radner also showed that the ability of some agents to infer the information that is leading other agents to bid prices away from what had been expected does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of the future based on commonly shared knowledge.

So rather than pursue Radner’s conception of rational expectations, I will focus here on the conventional understanding of “rational expectations” in modern macroeconomics, which is that the price expectations formed by the agents in a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is a very important property that any model ought to have. It simply says that a model ought to have the property that if one assumes that the agents in a model expect the equilibrium predicted by the model, then, given those expectations, the solution of the model will turn out to be the equilibrium of the model. This property is a consistency and coherence property that any model, regardless of its substantive predictions, ought to have. If a model lacks this property, there is something wrong with the model.

But there is a huge difference between saying that a model should have the property that correct expectations are self-fulfilling and saying that agents are in fact capable of predicting the equilibrium of the model. Assuming the former does not entail the latter. What kind of crazy model would have the property that correct expectations are not self-fulfilling? I mean, think about it: a model in which correct expectations are not self-fulfilling is a nonsense model.

But demanding that a model not spout gibberish is very different from insisting that the agents in the model necessarily have the capacity to predict what the equilibrium of the model will be. Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a matter of methodological fiat. But methodological fiat is what rational expectations has become in macroeconomics.

In his 1937 paper on intertemporal equilibrium, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most accurate description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that over time expectations do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth in the early 1960s, he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the idea of a cobweb cycle in which producers simply assume that the current price will remain at whatever level currently prevails. If there is a time lag between production decisions and realized output, as in agricultural markets between the initial application of inputs and the final yield of output, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would base their price expectations on more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectation assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.

So while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational-expectations into the domain of macroeconomics has not been empirically progressive, rational expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – a point that Hayek understood better than perhaps anyone else — is that there is a huge difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all future relative prices but one are known in advance. That assumption assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists). Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are only one or two variables about which agents have to form their rational expectations.

Four score years since Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

Hayek and Intertemporal Equilibrium

I am starting to write a paper on Hayek and intertemporal equilibrium, and as I write it over the next couple of weeks, I am going to post sections of it on this blog. Comments from readers will be even more welcome than usual, and I will do my utmost to reply to comments, a goal that, I am sorry to say, I have not been living up to in my recent posts.

The idea of equilibrium is an essential concept in economics. It is an essential concept in other sciences as well, but its meaning in economics is not the same as in other disciplines. The concept having originally been borrowed from physics, the meaning originally attached to it by economists corresponded to the notion of a system at rest, and it took a long time for economists to see that viewing an economy as a system at rest was not the only, or even the most useful, way of applying the equilibrium concept to economic phenomena.

What would it mean for an economic system to be at rest? The obvious answer was to say that prices and quantities would not change. If supply equals demand in every market, and if there is no exogenous change introduced into the system, e.g., in population, technology, tastes, etc., it would seem that there would be no reason for the prices paid and quantities produced to change in that system. But that view of an economic system was a very restrictive one, because such a large share of economic activity – savings and investment — is predicated on the assumption and expectation of change.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative, but that was the view of equilibrium that originally took hold in economics. The idea of a stationary timeless equilibrium can be traced back to the classical economists, especially Ricardo and Mill who wrote about the long-run tendency of an economic system toward a stationary state. But it was the introduction by Jevons, Menger, Walras and their followers of the idea of optimizing decisions by rational consumers and producers that provided the key insight for a more robust and fruitful version of the equilibrium concept.

If each economic agent (household or business firm) is viewed as making optimal choices based on some scale of preferences subject to limitations or constraints imposed by their capacities, endowments, technology and the legal system, then the equilibrium of an economy must describe a state in which each agent, given his own subjective ranking of the feasible alternatives, is making an optimal decision, and those optimal decisions are consistent with those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while also being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell.

The idea of an equilibrium as a set of independently conceived, mutually consistent optimal plans was latent in the earlier notions of equilibrium, but it could not be articulated until a concept of optimality had been defined. That concept was utility maximization, and it was further extended to include the ideas of cost minimization and profit maximization. Once the idea of an optimal plan was worked out, the necessary conditions for the mutual consistency of optimal plans could be articulated as the necessary conditions for a general economic equilibrium. Once equilibrium was defined as the consistency of optimal plans, the path was clear to define an intertemporal equilibrium as the consistency of optimal plans extending over time. Because current goods and services and otherwise identical goods and services in the future could be treated as economically distinct goods and services, defining the conditions for an intertemporal equilibrium was formally almost equivalent to defining the conditions for a static, stationary equilibrium. Just as the conditions for a static equilibrium could be stated in terms of equalities between the marginal rates of substitution of goods in consumption and in production and their corresponding price ratios, an intertemporal equilibrium could be stated in terms of equalities between the marginal rates of intertemporal substitution in consumption and in production and their corresponding intertemporal price ratios.

The only formal adjustment required in the necessary conditions for static equilibrium to be extended to intertemporal equilibrium was to recognize that, inasmuch as future prices (typically) are unobservable, and hence unknown to economic agents, the intertemporal price ratios cannot be ratios between actual current prices and actual future prices, but, instead, ratios between current prices and expected future prices. From this it followed that for optimal plans to be mutually consistent, all economic agents must have the same expectations of the future prices in terms of which their plans were optimized.
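To state the point a bit more formally (my own notation, a bare sketch rather than anything in Hayek or Lindahl): for any pair of periods t and t+1, the intertemporal first-order conditions require

$$ \mathrm{MRS}^{h}_{t,\,t+1} \;=\; \mathrm{MRT}^{f}_{t,\,t+1} \;=\; \frac{p_t}{p^{e}_{t+1}} \qquad \text{for every household } h \text{ and firm } f, $$

where $p^{e}_{t+1}$ is the expected future price. These conditions can characterize an intertemporal equilibrium only if the same expected price $p^{e}_{t+1}$ enters every agent’s optimization, which is just a restatement of the requirement that all agents share the same price expectations.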

The concept of an intertemporal equilibrium was first presented in English by F. A. Hayek in his 1937 article “Economics and Knowledge.” But it was through J. R. Hicks’s Value and Capital, published two years later in 1939, that the concept became more widely known and understood. In explaining and applying the concept of intertemporal equilibrium and introducing the derivative concept of a temporary equilibrium in which current markets clear, but individual expectations of future prices are not the same, Hicks did not claim originality, but instead of crediting Hayek for the concept, or even mentioning Hayek’s 1937 paper, Hicks credited the Swedish economist Erik Lindahl, who had published articles in the early 1930s in which he had articulated the concept. But although Lindahl had published his important work on intertemporal equilibrium before Hayek’s 1937 article, Hayek had already explained the concept in a 1928 article, “Das intertemporale Gleichgewichtssystem der Preise und die Bewegungen des ‘Geldwertes’” (English translation: “Intertemporal price equilibrium and movements in the value of money”).

Having been a junior colleague of Hayek’s in the early 1930s when Hayek arrived at the London School of Economics, and having come very much under Hayek’s influence for a few years before moving in a different theoretical direction in the mid-1930s, Hicks was certainly aware of Hayek’s work on intertemporal equilibrium, so it has long been a puzzle to me why Hicks did not credit Hayek along with Lindahl for having developed the concept of intertemporal equilibrium. It might be worth pursuing that question, but I mention it now only as an aside, in the hope that someone else might find it interesting and worthwhile to try to find a solution to that puzzle. As a further aside, I will mention that Murray Milgate in a 1979 article “On the Origin of the Notion of ‘Intertemporal Equilibrium’” has previously tried to redress the failure to credit Hayek’s role in introducing the concept of intertemporal equilibrium into economic theory.

What I am going to discuss here and in future posts are three distinct ways in which the concept of intertemporal equilibrium has been developed since Hayek’s early work – his 1928 and 1937 articles but also his 1941 discussion of intertemporal equilibrium in The Pure Theory of Capital. Of course, the best known development of the concept of intertemporal equilibrium is the Arrow-Debreu-McKenzie (ADM) general-equilibrium model. But although it can be thought of as a model of intertemporal equilibrium, the ADM model is set up in such a way that all economic decisions are taken before the clock even starts ticking; the transactions that are executed once the clock does start simply follow a pre-determined script. In the ADM model, the passage of time is a triviality, merely a way of recording the sequential order of the predetermined production and consumption activities. This feat is accomplished by assuming that all agents are present at time zero with their property endowments in hand and capable of transacting – but conditional on the determination of an equilibrium price vector that allows all optimal plans to be simultaneously executed over the entire duration of the model — in a complete set of markets (including state-contingent markets covering the entire range of contingent events that will unfold in the course of time whose outcomes could affect the wealth or well-being of any agent, with the probabilities associated with every contingent event known in advance).

Just as identical goods in different physical locations or different time periods can be distinguished as different commodities that can be purchased at different prices for delivery at specific times and places, identical goods can be distinguished under different states of the world (ice cream on July 4, 2017 in Washington DC at 2pm only if the temperature is greater than 90 degrees). Given the complete set of state-contingent markets and the known probabilities of the contingent events, an equilibrium price vector for the complete set of markets would give rise to optimal trades reallocating the risks associated with future contingent events and to an optimal allocation of resources over time. Although the ADM model is an intertemporal model only in a limited sense, it does provide an ideal benchmark describing the characteristics of a set of mutually consistent optimal plans.

The seminal work of Roy Radner in relaxing some of the extreme assumptions of the ADM model puts Hayek’s contribution to the understanding of the necessary conditions for an intertemporal equilibrium into proper perspective. At an informal level, Hayek was addressing the same kinds of problems that Radner analyzed with far more powerful analytical tools than were available to Hayek. But they were both concerned with a common problem: under what conditions could an economy with an incomplete set of markets be said to be in a state of intertemporal equilibrium? In an economy lacking the full set of forward and state-contingent markets describing the ADM model, intertemporal equilibrium cannot be predetermined before trading even begins, but must, if such an equilibrium obtains, unfold through the passage of time. Outcomes might be expected, but they would not be predetermined in advance. Echoing Hayek, though to my knowledge he does not refer to Hayek in his work, Radner describes his intertemporal equilibrium under uncertainty as an equilibrium of plans, prices, and price expectations. Even if it exists, the Radner equilibrium is not the same as the ADM equilibrium, because without a full set of markets, agents can’t fully hedge against, or insure, all the risks to which they are exposed. The distinction between ex ante and ex post is not eliminated in the Radner equilibrium, though it is eliminated in the ADM equilibrium.

Additionally, because all trades in the ADM model have been executed before “time” begins, it seems impossible to rationalize holding any asset whose only use is to serve as a medium of exchange. In his early writings on business cycles, e.g., Monetary Theory and the Trade Cycle, Hayek questioned whether it would be possible to rationalize the holding of money in the context of a model of full equilibrium, suggesting that monetary exchange, by severing the link between aggregate supply and aggregate demand characteristic of a barter economy as described by Say’s Law, was the source of systematic deviations from the intertemporal equilibrium corresponding to the solution of a system of Walrasian equations. Hayek suggested that progress in analyzing economic fluctuations would be possible only if the Walrasian equilibrium method could somehow be extended to accommodate the existence of money, uncertainty, and other characteristics of the real world while maintaining the analytical discipline imposed by the equilibrium method and the optimization principle. It proved to be a task requiring resources beyond those at Hayek’s, or probably anyone else’s, disposal at the time. But it would be wrong to fault Hayek for having had the insight to perceive and frame a problem that was beyond his capacity to solve. What he may be criticized for is mistakenly believing that he had in fact grasped the general outlines of a solution when he had only perceived some aspects of it, and for offering seriously inappropriate policy recommendations based on that incomplete understanding.

In Value and Capital, Hicks also expressed doubts whether it would be possible to analyze the economic fluctuations characterizing the business cycle using a model of pure intertemporal equilibrium. He proposed an alternative approach for analyzing fluctuations, which he called the method of temporary equilibrium. The essence of the temporary-equilibrium method is to analyze the behavior of an economy under the assumption that all markets for current delivery clear (in some not entirely clear sense of the term “clear”) while understanding that demand and supply in current markets depend not only on current prices but also upon expected future prices, and that the failure of current prices to equal what they had been expected to be is a potential cause for the plans that economic agents are trying to execute to be modified and possibly abandoned. In The Pure Theory of Capital, Hayek discussed Hicks’s temporary-equilibrium method as a possible way of achieving the modification of the Walrasian method that he himself had proposed in Monetary Theory and the Trade Cycle. But after a brief critical discussion of the method, he dismissed it for reasons that remain obscure. Hayek’s rejection of the temporary-equilibrium method seems in retrospect to have been one of his worst theoretical — or perhaps, meta-theoretical — blunders.

Decades later, C. J. Bliss developed the concept of temporary equilibrium to show that the temporary-equilibrium method can rationalize both holding an asset purely for its services as a medium of exchange and the existence of financial intermediaries (private banks) that supply financial assets held exclusively to serve as a medium of exchange. In such a temporary-equilibrium model with financial intermediaries, it seems possible to model not only the existence of private suppliers of a medium of exchange, but also the conditions – in a very general sense — under which the system of financial intermediaries breaks down. The key variables, of course, are the vectors of expected prices subject to which the plans of individual households, business firms, and financial intermediaries are optimized. The critical point that emerges from Bliss’s analysis is that there are sets of expected prices which, if held by agents, are inconsistent with the existence of even a temporary equilibrium. In that case, price flexibility in current markets cannot, even in principle, result in a temporary equilibrium, because there is no vector of current prices in markets for present delivery that solves the temporary-equilibrium system. Even perfect price flexibility doesn’t lead to equilibrium if the equilibrium does not exist. And the equilibrium cannot exist if price expectations are in some sense “too far out of whack.”

Expected prices are thus, necessarily, equilibrating variables. But there is no economic mechanism that tends to cause the adjustment of expected prices so that they are consistent with the existence of even a temporary equilibrium, much less a full equilibrium.

Unfortunately, modern macroeconomics continues to neglect the temporary-equilibrium method; instead macroeconomists have for the most part insisted on the adoption of the rational-expectations hypothesis, a hypothesis that elevates question-begging to the status of a fundamental axiom of rationality. The crucial error in the rational-expectations hypothesis was to misunderstand the role of the comparative-statics method developed by Samuelson in Foundations of Economic Analysis. The role of the comparative-statics method is to isolate the pure theoretical effect of a parameter change under a ceteris-paribus assumption. Such an effect could be derived only by comparing two equilibria under the assumption of a locally unique and stable equilibrium before and after the parameter change. But the method of comparative statics is completely inappropriate to most macroeconomic problems, which are precisely concerned with the failure of the economy to achieve, or even to approximate, the unique and stable equilibrium state posited by the comparative-statics method.

Moreover, the original empirical application of the rational-expectations hypothesis by Muth was in the context of the behavior of a single market in which the market was dominated by well-informed specialists who could be presumed to have well-founded expectations of future prices conditional on a relatively stable economic environment. Under conditions of macroeconomic instability, there is good reason to doubt that the accumulated knowledge and experience of market participants would enable agents to form accurate expectations of the future course of prices even in those markets about which they have expert knowledge. Insofar as the rational-expectations hypothesis has any claim to empirical relevance, it is only in the context of stable market situations that can be assumed to be already operating in the neighborhood of an equilibrium. For the kinds of problems that macroeconomists are really trying to answer, that assumption is neither relevant nor appropriate.

Roger Farmer’s Prosperity for All

I have just read a review copy of Roger Farmer’s new book Prosperity for All, which distills many of Roger’s very interesting ideas into a form which, though readable, is still challenging — at least, it was for me. There is a lot that I like and agree with in Roger’s book, and the fact that he is a UCLA economist, though he came to UCLA after my departure, is certainly a point in his favor. So I will begin by mentioning some of the things that I really liked about Roger’s book.

What I like most is that he recognizes that beliefs are fundamental, which is almost exactly what I meant when I wrote this post (“Expectations Are Fundamental”) five years ago. The point I wanted to make is that the idea that there is some fundamental existential reality that economic agents try to perceive — and, if they are rational, will perceive — is a gross and misleading oversimplification, because expectations themselves are part of reality. In a world in which expectations are fundamental, the Keynesian beauty-contest theory of expectations and stock prices (described in chapter 12 of The General Theory) is not as absurd as it is widely considered to be by believers in the efficient market hypothesis. The almost universal unprofitability of simple trading rules or algorithms is not inconsistent with a market process in which the causality between prices and expectations goes in both directions, in which case anticipating expectations is no less rational than anticipating future cash flows.

One of the treats of reading this book is Farmer’s recollections of his time as a graduate student at Penn in the early 1980s when David Cass, Karl Shell, and Costas Azariadis were developing their theory of sunspot equilibrium in which expectations are self-fulfilling, an idea skillfully deployed by Roger to revise the basic New Keynesian model and re-orient it along a very different path from the standard New Keynesian one. I am sympathetic to that reorientation, and the main reason for it is that Roger rejects the idea that there is a unique equilibrium to which the economy, on its own, automatically reverts, albeit somewhat more slowly than if speeded along by the appropriate monetary policy. The notion that there is a unique equilibrium to which the economy automatically reverts is an assumption with no basis in theory or experience. The most that the natural-rate hypothesis can tell us is that if an economy is operating at its natural rate of unemployment, monetary expansion cannot permanently reduce the rate of unemployment below that natural rate. Eventually — once economic agents come to expect that the monetary expansion and the correspondingly higher rate of inflation will be maintained indefinitely — the unemployment rate must revert to the natural rate. But the natural-rate hypothesis does not tell us that monetary expansion cannot reduce unemployment when the actual unemployment rate exceeds the natural rate, although it is often misinterpreted as making that assertion.

In his book, Roger takes the anti-natural-rate argument a step further, asserting that the natural rate of unemployment is not unique. There is actually a range of unemployment rates at which the economy can permanently remain; which of those alternative natural rates the economy winds up at depends on the expectations held by the public about future nominal income. The higher expected future income, the greater consumption spending and, consequently, the greater employment. Things are a bit more complicated than I have just described them, because Roger also believes that consumption depends not on current income but on wealth. However, in the very simplified model with which Roger operates, wealth depends on expectations about future income. The more optimistic people are about their income-earning opportunities, the higher asset values; the higher asset values, the wealthier the public, and the greater consumption spending. The relationship between current income and expected future income is what Roger calls the belief function.

Thus, Roger juxtaposes a simple New Keynesian model against his own monetary model. The New Keynesian model consists of 1) an investment-equals-saving equilibrium condition (IS curve) describing the optimal consumption/savings decision of the representative individual as a locus of combinations of expected real interest rates and real income, based on the assumed rate of time preference of the representative individual, expected future income, and expected future inflation; 2) a Taylor rule describing how the monetary authority sets its nominal interest rate as a function of inflation, the output gap, and its target (natural) nominal interest rate; 3) a short-run Phillips Curve that expresses actual inflation as a function of expected future inflation and the output gap. The three basic equations allow the three endogenous variables (inflation, real income and the nominal rate of interest) to be determined. The IS curve represents equilibrium combinations of real income and real interest rates; the Taylor rule determines a nominal interest rate; given the nominal rate determined by the Taylor rule, the IS curve can be redrawn to represent equilibrium combinations of real income and inflation. The intersection of the redrawn IS curve with the Phillips curve determines the inflation rate and real income.
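For readers who prefer to see the skeleton, here is a stylized textbook rendering of that three-equation system (my notation, not Farmer’s), with $x_t$ the output gap, $\pi_t$ inflation, $i_t$ the nominal interest rate, $\rho$ the rate of time preference, and $E_t$ the expectation formed in period t:

$$\begin{aligned}
x_t &= E_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - E_t\pi_{t+1} - \rho\right) &&\text{(IS curve)}\\
i_t &= \rho + \phi_\pi \pi_t + \phi_x x_t &&\text{(Taylor rule)}\\
\pi_t &= \beta E_t\pi_{t+1} + \kappa x_t &&\text{(Phillips curve)}
\end{aligned}$$

Three equations in the three endogenous variables $x_t$, $\pi_t$ and $i_t$, just as described above.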

Roger doesn’t like the New Keynesian model because he rejects the notion of a unique equilibrium with a unique natural rate of unemployment, a notion that I have argued is theoretically unfounded. Roger dismisses the natural-rate hypothesis on empirical grounds, the frequent observations of persistently high rates of unemployment being inconsistent with the idea that there are economic forces causing unemployment to revert back to the natural rate. Two responses to this empirical anomaly are possible: 1) the natural rate of unemployment is unstable, so that the observed persistence of high unemployment reflects increases in the underlying but unobservable natural rate of unemployment; 2) the adverse economic shocks that produce high unemployment are persistent, with unemployment returning to a natural level only after the adverse shocks have ceased. In the absence of independent empirical tests of the hypothesis that the natural rate of unemployment has changed, or of the hypothesis that adverse shocks causing unemployment to rise above the natural rate are persistent, neither of these responses is plausible, much less persuasive.

So Roger recasts the basic New Keynesian model in a very different form. While maintaining the Taylor Rule, he rewrites the IS curve so that it describes a relationship between the nominal interest rate and the expected growth of nominal income given the assumed rate of time preference, and in place of the Phillips Curve, he substitutes his belief function, which says that the expected growth of nominal income in the next period equals the current rate of growth. The IS curve and the Taylor Rule provide two steady-state equations in three variables (nominal-income growth, the nominal interest rate, and inflation), so that the rate of inflation is left undetermined. Once the belief function specifies the expected rate of growth of nominal income, the nominal interest rate consistent with expected nominal-income growth is determined. Since the belief function tells us only that expected nominal-income growth equals the current rate of nominal-income growth, any change in nominal-income growth persists into the next period.
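Schematically, and again in my own notation rather than Roger’s, the recast system can be written roughly as follows, with $\Delta y_t$ the growth rate of nominal income and the output-gap term in the Taylor rule suppressed for simplicity:

$$\begin{aligned}
i_t &= \rho + E_t\,\Delta y_{t+1} &&\text{(IS curve restated in nominal-income terms)}\\
i_t &= \bar{\imath} + \phi_\pi \pi_t &&\text{(Taylor rule)}\\
E_t\,\Delta y_{t+1} &= \Delta y_t &&\text{(belief function)}
\end{aligned}$$

The first two equations by themselves leave the steady-state rate of nominal-income growth, and hence inflation, undetermined; the belief function closes the system by tying expected nominal-income growth to its current value, so that any change in nominal-income growth simply persists, just as described above.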

At any rate, Roger’s policy proposal is not to change the interest-rate rule followed by the monetary authority, but to propose a rule whereby the monetary authority influences the public’s expectations of nominal-income growth. The greater expected nominal-income growth, the greater wealth, and the greater consumption expenditures. The greater consumption expenditures, the greater income and employment. Expectations are self-fulfilling. Roger therefore advocates a policy by which the government buys and sells a stock-market index fund in order to keep overall wealth at a level that will generate enough consumption expenditures to support maximum sustainable employment.

This is a quick summary of some of the main substantive arguments that Roger makes in his book, and I hope that I have not misrepresented them too badly. As I have already said, I very much sympathize with his criticism of the New Keynesian model, and I agree with nearly all of his criticisms. I also agree wholeheartedly with his emphasis on the importance of expectations and on the self-fulfilling character of expectations. Nevertheless, I have to admit that I have trouble taking Roger’s own monetary model and his policy proposal for stabilizing a broad index of equity prices over time seriously. And the reason I am so skeptical about Roger’s model and his policy recommendation is that his model, which does after all bear at least a family resemblance to the simple New Keynesian model, strikes me as being far too simplified to be credible as a representation of a real-world economy. His model, like the New Keynesian model, is an intertemporal model with neither money nor real capital, and the idea that there is an interest rate in such a model is, though theoretically defensible, not very plausible. There may be a sequence of periods in such a model in which some form of intertemporal exchange takes place, but without explicitly introducing at least one good that is carried over from period to period, the extent of intertemporal trading is limited and devoid of the arbitrage constraints inherent in a system in which real assets are held from one period to the next.

So I am very skeptical about any macroeconomic model that lacks a market for real assets, through which the interest rate interacts with asset values and expected future prices in such a way that the existing stock of durable assets is willingly held over time. The simple New Keynesian model, in which there is no money and no durable assets, but simply bonds whose existence is difficult to rationalize in the absence of money or durable assets, does not strike me as a sound foundation for making macroeconomic policy. An interest rate may exist in such a model, but such a model strikes me as woefully inadequate for macroeconomic policy analysis. And although Roger has certainly offered some interesting improvements on the simple New Keynesian model, I would not be willing to rely on Roger’s monetary model for the sweeping policy and institutional recommendations that he proposes, especially his proposal for stabilizing the long-run growth path of a broad index of stock prices.

This is an important point, so I will try to restate it within a wider context. Modern macroeconomics, of which Roger’s model is one of the more interesting examples, flatters itself by claiming to be grounded in the secure microfoundations of the Arrow-Debreu-McKenzie general equilibrium model. But the great achievement of the ADM model was to show the logical possibility of an equilibrium of the independently formulated, optimizing plans of an unlimited number of economic agents producing and trading an unlimited number of commodities over an unlimited number of time periods.

To prove the mutual consistency of such a decentralized decision-making process coordinated by a system of equilibrium prices was a remarkable intellectual achievement. Modern macroeconomics deceptively trades on the prestige of this achievement in claiming to be founded on the ADM general-equilibrium model; the claim is at best misleading, because modern macroeconomics collapses the multiplicity of goods, services, and assets into a single non-durable commodity, so that the only relevant plan the agents in the modern macromodel are called upon to make is a decision about how much to spend in the current period given a shared utility function and a shared production technology for the single output. In the process, all the hard work performed by the ADM general-equilibrium model in explaining how a system of competitive prices could achieve an equilibrium of the complex independent — but interdependent — intertemporal plans of a multitude of decision-makers is effectively discarded and disregarded.

This approach to macroeconomics is not microfounded, but its opposite. The approach relies on the assumption that all but a very small set of microeconomic issues are irrelevant to macroeconomics. Now it is legitimate for macroeconomics to disregard many microeconomic issues, but the assumption that there is continuous microeconomic coordination, apart from the handful of potential imperfections on which modern macroeconomics chooses to focus, is not legitimate. In particular, to collapse the entire economy into a single output implies that all the separate markets encompassed by an actual economy are in equilibrium and that the equilibrium is maintained over time. For that equilibrium to be maintained over time, agents must formulate correct expectations of all the individual relative prices that prevail in those markets over time. The ADM model sidestepped that expectational problem by assuming that a full set of current and forward markets exists in the initial period and that all the agents participating in the economy are present and endowed with wealth enabling them to trade in the initial period. Under those rather demanding assumptions, if an equilibrium price vector covering all current and future markets is arrived at, the optimizing agents will formulate a set of mutually consistent optimal plans conditional on that vector of equilibrium prices so that all the optimal plans can and will be carried out as time happily unfolds for as long as the agents continue in their blissful existence.

However, without a complete set of current and forward markets, achieving the full equilibrium of the ADM model requires that agents formulate consistent expectations of the future prices that will be realized only over the course of time, not in the initial period. Roy Radner, who extended the ADM model to accommodate the case of incomplete markets, called such a sequential equilibrium an equilibrium of plans, prices and expectations. The sequential equilibrium described by Radner has the property that expectations are rational, but the assumption of rational expectations for all future prices over a sequence of future time periods is so unbelievably outlandish as an approximation to reality — sort of like the assumption that it could be 76 degrees Fahrenheit in Washington DC in February — that to build that assumption into a macroeconomic model is an absurdity of mind-boggling proportions. But that is precisely what modern macroeconomics, in both its Real Business Cycle and New Keynesian incarnations, has done.

If instead of the sequential equilibrium of plans, prices and expectations, one tries to model an economy in which the price expectations of agents can be inconsistent, while prices adjust within any period to clear markets – the method of temporary equilibrium first described by Hicks in Value and Capital – one can begin to develop a richer conception of how a macroeconomic system can be subject to the financial disturbances and financial crises to which modern macroeconomies are occasionally, if not routinely, vulnerable. But that would require a reorientation, if not a repudiation, of the path on which macroeconomics has been resolutely marching for nigh on forty years. In his 1984 paper “Consistent Temporary Equilibrium,” published in a volume edited by J. P. Fitoussi, C. J. Bliss made a start on developing such a macroeconomic theory.

There are few economists better equipped than Roger Farmer to lead macroeconomics onto a new and more productive path. He has not done so in this book, but I am hoping that, in his next one, he will.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
