Archive for the 'Arrow-Debreu-McKenzie model' Category

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the Lucas Critique (Lucas 1976), focused on technical identification issues and on the dependence of the estimated regression coefficients of econometric models on agents’ expectations, which are conditional on the macroeconomic policies actually in effect, rendering those models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for Keynes’s abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and has not, described in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not very realistic stories about how markets might clear, of which the two best-known are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good, correctly anticipated by market participants and corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so it is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled so that each agent can carry out his/her/their plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent tried to normalize their insistence that equilibrium is an obligatory postulate that economists must accept by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if any theoretical or methodological assumption becomes ipso facto justified once its adoption becomes routine. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that the hypothesis that any collection of economic time series is either in equilibrium or out of equilibrium is empirically empty. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium supports the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras’s Law is satisfied, aggregating the excess demands of all agents would not necessarily cause the aggregate excess-demand functions to behave in such a way as to guarantee a unique or a stable equilibrium. But if we have no good argument to explain why a unique, or at least a stable, neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?
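For readers who want the result stated precisely, a minimal formal statement of the Sonnenschein-Mantel-Debreu result runs roughly as follows (my notation, following the standard textbook formulation rather than the original papers). Let z be any candidate aggregate excess-demand function defined on prices bounded away from zero:

\[
z:\{\,p\in\mathbb{R}^{n}_{++}: p_i/\lVert p\rVert \ge \varepsilon \,\}\to\mathbb{R}^{n},
\qquad
z \text{ continuous},
\qquad
z(\lambda p)=z(p)\ \forall\,\lambda>0,
\qquad
p\cdot z(p)=0 .
\]

Then there exists an exchange economy with no more than \(n\) well-behaved, utility-maximizing consumers whose aggregate excess demand coincides with \(z\) on that domain. Because continuity, homogeneity of degree zero, and Walras’s Law impose no further restrictions, aggregate excess demand need not satisfy any of the conditions (such as gross substitutability) that would guarantee uniqueness or stability of equilibrium.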

As Franklin Fisher (1989) showed, the inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), the comparative-statics exercise, is conditional on the assumption that there is at least one stable general-equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles (second-order circular movements superimposed on the planets’ circular orbits around the earth) to account for the observed motions.

Kepler and later Galileo conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system and proposing that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric predictions of the observed time series outperformed the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative lacking scientific or philosophical justification, of formal intertemporal-optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from a more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with some of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time-series variables, current macroeconomic disputes are real disputes about what the world is like: whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or of coordination failures that cause economic agents to be surprised and disappointed and to be unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It’s long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, Keynes argued explicitly and at length that his results did not depend on the assumption of sticky wages, so he certainly did not introduce that assumption himself. See Leijonhufvud (1968).

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture, whose principal theme was an attack on the simple notion that the long-observed correlation between aggregate demand and employment is a reliable basis for conducting macroeconomic policy, “The Pretence of Knowledge.” Reiterating an argument that he had made over 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek was informally anticipating the argument that Robert Lucas repackaged two years later in his famous critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between “phenomena of disorganized complexity” and “phenomena of organized complexity.” Statistical relationships or correlations between phenomena of disorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as merely the pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on a methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie models of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained, macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two basic principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those choices. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than that individual rationality be explicitly assumed in every macroeconomic model and that all decisions by agents in the model be individually rational.

In practice, implementing Lucasian methodological principles required that, in any macroeconomic model, all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, each individual decision being conditional on the decisions of other agents. The individual optimization problems can therefore be solved only simultaneously for all agents, not individually in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations had simply been assumed, based on faulty reasoning by Walras, Fisher and Pareto, who merely counted equations and unknowns. A rigorous proof of existence was provided only in 1936 by Abraham Wald, and subsequently in more general form by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.
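To see what the existence results do and do not establish, a stylized statement (my notation, simplified from the standard textbook treatment) may help. If the aggregate excess-demand function \(z(p)\) is continuous on the price simplex \(\Delta\), homogeneous of degree zero, and satisfies Walras’s Law, then a fixed-point argument (Brouwer or Kakutani) yields

\[
\exists\, p^{*}\in\Delta \;\text{such that}\; z(p^{*})\le 0,
\qquad
p^{*}_i>0 \implies z_i(p^{*})=0 .
\]

The theorem asserts only that the system of equations has a solution; it says nothing about whether, or how, trading at disequilibrium prices would move an actual economy toward \(p^{*}\).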

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but Marshallian partial-equilibrium analysis presumes that all markets and prices, except the price in the single market under analysis, are in a state of equilibrium. So the Marshallian approach provides no more explanation of a process by which a set of equilibrium prices for an entire economy is, or could be, reached than the Walrasian approach does.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model. It does so on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series derived from these models are generated by attributing all observed variations in national income to unexplained shocks in productivity, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of those variations.

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1959 paper, “Toward a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility somehow guarantees that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the concept of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was methodologically necessary and ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.
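A compact way of expressing the transformation Lucas effected is the following (my notation, not Lucas’s): the original idea, due to Muth, is that each agent’s subjective forecast equals the mathematical expectation implied by the true model, conditional on that agent’s information; the Lucasian usage adds the assumption of a common information set, so that

\[
p^{e}_{i,t+1} \;=\; E\!\left[p_{t+1}\mid\Omega_t\right] \quad\text{for every agent } i,
\qquad
E\!\left[p_{t+1}-p^{e}_{i,t+1}\mid\Omega_t\right]=0 ,
\]

where \(\Omega_t\) is the information commonly available at time \(t\). With identical conditional expectations, and with markets assumed to clear at the prices so expected, the model economy is, by construction, always in (stochastic) equilibrium except at the moments when new information arrives.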

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the endeavor ought to be conducted as a theoretical and empirical one. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications for both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying them. But most important theories, even in pure mathematics, start out as informal theories that only gradually become axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives with which Lucas and his followers have justified their pretensions to scientific prestige and authority, and which they have used to compel compliance, only belie those pretensions.

The Rises and Falls of Keynesianism and Monetarism

The following is extracted from a paper on the history of macroeconomics that I’m now writing. I don’t know yet where or when it will be published and there may or may not be further installments, but I would be interested in any comments or suggestions that readers might have. Regular readers, if there are any, will probably recognize some familiar themes that I’ve been writing about in a number of my posts over the past several months. So despite the diminished frequency of my posting, I haven’t been entirely idle.

Recognizing the cognitive dissonance between the vision of the optimal equilibrium of a competitive market economy described by Marshallian economic theory and the massive unemployment of the Great Depression, Keynes offered an alternative, and, in his view, more general, theory, the optimal neoclassical equilibrium being a special case.[1] The explanatory barrier that Keynes struggled, not quite successfully, to overcome in the dire circumstances of the 1930s was why market-price adjustments do not have the equilibrating tendencies attributed to them by Marshallian theory. The power of Keynes’s analysis, enhanced by his rhetorical gifts, enabled him to persuade much of the economics profession, especially many of the most gifted younger economists of the time, that he was right. But his argument, failing to expose the key weakness in the neoclassical orthodoxy, was incomplete.

The full title of Keynes’s book, The General Theory of Employment, Interest and Money, identifies the key elements of his revision of neoclassical theory. First, contrary to a simplistic application of Marshallian theory, the mass unemployment of the Great Depression would not be substantially reduced by cutting wages to “clear” the labor market. The reason, according to Keynes, is that the levels of output and unemployment depend not on money wages, but on planned total spending (aggregate demand). Mass unemployment is the result of too little spending, not excessive wages. Reducing wages would simply cause a corresponding decline in total spending, without increasing output or employment.

If wage cuts do not increase output and employment, the ensuing high unemployment, Keynes argued, is involuntary, not the outcome of optimizing choices made by workers and employers. Ever since, the notion that unemployment can be involuntary has remained a contested issue between Keynesians and neoclassicists, a contest requiring resolution in favor of one or the other theory or some reconciliation of the two.

Besides rejecting the neoclassical theory of employment, Keynes also famously disputed the neoclassical theory of interest by arguing that the rate of interest is not, as in the neoclassical theory, a reward for saving, but a reward for sacrificing liquidity. In Keynes’s view, rather than equilibrating savings and investment, interest equilibrates the demand to hold the money issued by the monetary authority with the amount actually issued. Under the neoclassical theory, it is the price level that adjusts to equilibrate the demand for money with the quantity issued.
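The contrast can be put schematically (a stylized sketch, not Keynes’s own notation): in the neoclassical story the interest rate clears the market for saving and investment while the price level clears the money market, whereas in Keynes’s story the interest rate clears the money market:

\[
\text{Neoclassical:}\quad S(i)=I(i)\ \text{determines}\ i,
\qquad
M^{s}=kPY\ \text{determines}\ P ;
\]
\[
\text{Keynes:}\quad M^{s}=L(Y,i)\ \text{determines}\ i\ \text{(given }Y\text{)},
\qquad
\partial L/\partial i<0 .
\]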

Had Keynes been more attuned to the Walrasian paradigm, he might have recast his argument that cutting wages would not eliminate unemployment by noting the inapplicability of a Marshallian supply-demand analysis to the labor market (which accounts for over 50 percent of national income), because wage cuts would shift demand and supply curves in almost every other input and output market, grossly violating the ceteris-paribus assumption underlying the Marshallian supply-demand paradigm. When every change in the wage shifts supply and demand curves in all markets for goods and services, which in turn causes the labor-demand and labor-supply curves to shift, a supply-demand analysis of aggregate unemployment becomes a futile exercise.

Keynes’s work had two immediate effects on economics and economists. First, it opened up a new field of research – macroeconomics – based on his theory that total output and employment are determined by aggregate demand. Representing only one element of Keynes’s argument, the simplified Keynesian model, on which macroeconomic theory was founded, seemed disconnected from both the Marshallian and the Walrasian versions of neoclassical theory.

Second, the apparent disconnect between the simple Keynesian macro-model and neoclassical theory provoked an ongoing debate about the extent to which Keynesian theory could be deduced, or even reconciled, with the premises of neoclassical theory. Initial steps toward a reconciliation were provided when a model incorporating the quantity of money and the interest rate into the Keynesian analysis was introduced, soon becoming the canonical macroeconomic model of undergraduate and graduate textbooks.

Critics of Keynesian theory, usually those opposed to its support for deficit spending as a tool of aggregate demand management, its supposed inflationary bias, and its encouragement or toleration of government intervention in the free-market economy, tried to debunk Keynesianism by pointing out its inconsistencies with the neoclassical doctrine of a self-regulating market economy. But proponents of Keynesian precepts were also trying to reconcile Keynesian analysis with neoclassical theory. Future Nobel Prize winners like J. R. Hicks, J. E. Meade, Paul Samuelson, Franco Modigliani, James Tobin, and Lawrence Klein all derived various Keynesian propositions from neoclassical assumptions, usually by resorting to the un-Keynesian assumption of rigid or sticky prices and wages.

What both Keynesian and neoclassical economists failed to see is that neoclassical theory, in either its Walrasian or its Marshallian version, notwithstanding the optimality of an economy with equilibrium market prices, cannot explain either how that set of equilibrium prices is, or can be, found, or how it results automatically from the routine operation of free markets.

The assumption made implicitly by both Keynesians and neoclassicals was that, in an ideal perfectly competitive free-market economy, prices would adjust, if not instantaneously, at least eventually, to their equilibrium, market-clearing, levels so that the economy would achieve an equilibrium state. Not all Keynesians, of course, agreed that a perfectly competitive economy would reach that outcome, even in the long-run. But, according to neoclassical theory, equilibrium is the state toward which a competitive economy is drawn.

Keynesian policy could therefore be rationalized as an instrument for reversing departures from equilibrium and ensuring that such departures are relatively small and transitory. Notwithstanding Keynes’s explicit argument that wage cuts cannot eliminate involuntary unemployment, the sticky-prices-and-wages story was too convenient not to be adopted as a rationalization of Keynesian policy while also reconciling that policy with the neoclassical orthodoxy associated with the postwar ascendancy of the Walrasian paradigm.

The Walrasian ascendancy in neoclassical theory was the culmination of a silent revolution beginning in the late 1920s when the work of Walras and his successors was taken up by a younger generation of mathematically trained economists. The revolution proceeded along many fronts, of which the most important was proving the existence of a solution of the system of equations describing a general equilibrium for a competitive economy — a proof that Walras himself had not provided. The sophisticated mathematics used to describe the relevant general-equilibrium models and derive mathematically rigorous proofs encouraged the process of rapid development, adoption and application of mathematical techniques by subsequent generations of economists.

Despite the early success of the Walrasian paradigm, Kenneth Arrow, perhaps the most important Walrasian theorist of the second half of the twentieth century, drew attention to the explanatory gap within the paradigm: how the adjustment of disequilibrium prices is possible in a model of perfect competition in which every transactor takes market price as given. The Walrasian theory shows that a competitive equilibrium ensuring the consistency of agents’ plans to buy and sell results from an equilibrium set of prices for all goods and services. But the theory is silent about how those equilibrium prices are found and communicated to the agents of the model, the Walrasian tâtonnement process being an empirically empty heuristic artifact.

In fact, the explanatory gap identified by Arrow was even wider than he had suggested or realized, for another aspect of the Walrasian revolution of the late 1920s and 1930s was the extension of the equilibrium concept from a single-period equilibrium to an intertemporal equilibrium. Although earlier works by Irving Fisher and Frank Knight laid a foundation for this extension, the explicit articulation of intertemporal-equilibrium analysis was the nearly simultaneous contribution of three young economists, two Swedes (Myrdal and Lindahl) and an Austrian (Hayek) whose significance, despite being partially incorporated into the canonical Arrow-Debreu-McKenzie version of the Walrasian model, remains insufficiently recognized.

These three economists transformed the concept of equilibrium from an unchanging static economic system at rest to a dynamic system changing from period to period. While Walras and Marshall had conceived of a single-period equilibrium with no tendency to change barring an exogenous change in underlying conditions, Myrdal, Lindahl and Hayek conceived of an equilibrium unfolding through time, defined by the mutual consistency of the optimal plans of disparate agents to buy and sell in the present and in the future.

In formulating optimal plans that extend through time, agents consider both the current prices at which they can buy and sell, and the prices at which they will (or expect to) be able to buy and sell in the future. Although it may sometimes be possible to buy or sell forward at a currently quoted price for future delivery, agents planning to buy and sell goods or services rely, for the most part, on their expectations of future prices. Those expectations, of course, need not always turn out to have been accurate.

The dynamic equilibrium described by Myrdal, Lindahl and Hayek is a contingent event in which all agents have correctly anticipated the future prices on which they have based their plans. In the event that some, if not all, agents have incorrectly anticipated future prices, those agents whose plans were based on incorrect expectations may have to revise their plans or be unable to execute them. But unless all agents share the same expectations of future prices, their expectations cannot all be correct, and some of those plans may not be realized.

The impossibility of an intertemporal equilibrium of optimal plans when agents do not share the same expectations of future prices implies that the adjustment of perfectly flexible market prices is not sufficient for an optimal equilibrium to be achieved. I shall have more to say about this point below, but for now I want to note that the growing interest in the quiet Walrasian revolution in neoclassical theory, which occurred almost simultaneously with the Keynesian revolution, made it inevitable that Keynesian models would be recast in explicitly Walrasian terms.

What emerged from the Walrasian reformulation of Keynesian analysis was the neoclassical synthesis that became the textbook version of macroeconomics in the 1960s and 1970s. But the seemingly anomalous conjunction of both inflation and unemployment during the 1970s led to a reconsideration and widespread rejection of the Keynesian proposition that output and employment are directly related to aggregate demand.

Indeed, supporters of the Monetarist views of Milton Friedman argued that the high inflation and unemployment of the 1970s amounted to an empirical refutation of the Keynesian system. But Friedman’s political conservatism, free-market ideology, and acerbic criticism of Keynesian policies obscured the extent to which his largely atheoretical monetary thinking was influenced by Keynesian and Marshallian concepts, an influence that rendered his version of Monetarism an unattractive alternative for younger monetary theorists, schooled in the Walrasian version of neoclassicism, who were seeking a clear theoretical contrast with the Keynesian macro model.

The brief Monetarist ascendancy following the 1970s inflation collapsed in the early 1980s, after Friedman’s Monetarist policy advice for controlling the quantity of money proved unworkable. Central banks, foolishly trying to implement the advice, prolonged a needlessly deep recession while consistently overshooting their monetary targets, thereby provoking a long series of embarrassing warnings from Friedman about the imminent return of double-digit inflation.


[1] Hayek, both a friend and a foe of Keynes, would chide Keynes decades after Keynes’s death for calling his theory a general theory when, in Hayek’s view, it was a special theory relevant only in periods of substantially less than full employment, when increasing aggregate demand could increase total output. But in making this criticism, Hayek implicitly assumed an automatic equilibration mechanism ensuring that general equilibrium obtains, thereby disregarding what he himself had acknowledged in his theory of intertemporal equilibrium: that no such automatic mechanism exists.

The Explanatory Gap and Mengerian Subjectivism

My last several posts have been focused on Marshall and Walras: the relationships and differences between the partial-equilibrium approach of Marshall and the general-equilibrium approach of Walras, and how the current state of neoclassical economics is divided between the more practical, applied approach of Marshallian partial-equilibrium analysis and the more theoretical general-equilibrium approach of Walras. The divide is particularly important for the history of macroeconomics, because many of the macroeconomic controversies in the decades since Keynes have also involved differences between Marshallians and Walrasians. I’m not happy with either the Marshallian or the Walrasian approach, and I have been trying to articulate my unhappiness with both branches of current neoclassical thinking by going back to the work of the forgotten marginal revolutionary, Carl Menger. I’ve been writing a paper, drawing on some of my recent musings, for a conference later this month celebrating the 150th anniversary of Menger’s great work, because I think Menger offers at least some hints at how to go about developing an improved neoclassical theory. Here’s a further sampling of my thinking, drawn from one of the sections of my work in progress.

Both the Marshallian and the Walrasian versions of equilibrium analysis have failed to bridge an explanatory gap between the equilibrium state, whose existence is crucial for such empirical content as can be claimed on behalf of those versions of neoclassical theory, and any account of how such an equilibrium state could ever be attained. The gap was identified by one of the chief architects of modern neoclassical theory, Kenneth Arrow, in his 1959 paper “Toward a Theory of Price Adjustment.”

The equilibrium is defined in terms of a set of prices. In the Marshallian version, the equilibrium prices are assumed to have already been determined in all but a single market (or perhaps a subset of closely related markets), so that the Marshallian equilibrium simply represents how, in a single small or isolated market, an equilibrium price in that market is determined under suitable ceteris-paribus conditions, thereby leaving the equilibrium prices determined in other markets unaffected.

In the Walrasian version, all prices in all markets are determined simultaneously, but the method for determining those prices simultaneously was not spelled out by Walras other than by reference to the admittedly fictitious and purely heuristic tâtonnement process.

Both the Marshallian and Walrasian versions can show that equilibrium has optimal properties, but neither version can explain how the equilibrium is reached or how it can be discovered in practice. This is true even in the single-period context in which the Walrasian and Marshallian equilibrium analyses were originally carried out.

The single-period equilibrium has been extended, at least in a formal way, in the standard Arrow-Debreu-McKenzie (ADM) version of the Walrasian equilibrium, but this version is in important respects just an enhanced version of a single-period model inasmuch as all trades take place at time zero in a complete array of future state-contingent markets. So it is something of a stretch to consider the ADM model a truly intertemporal model in which the future can unfold in potentially surprising ways as opposed to just playing out a script already written in which agents go through the motions of executing a set of consistent plans to produce, purchase and sell in a sequence of predetermined actions.

Under less extreme assumptions than those of the ADM model, an intertemporal equilibrium involves both equilibrium current prices and equilibrium expected prices, and just as the equilibrium current prices are the same for all agents, equilibrium expected future prices must be equal for all agents. In his 1937 exposition of the concept of intertemporal equilibrium, Hayek explained the difference between what agents are assumed to know in a state of intertemporal equilibrium and what they are assumed to know in a single-period equilibrium.

If all agents share common knowledge, it may be plausible to assume that they will rationally arrive at similar expectations of the future prices. But if their stock of knowledge consists of both common knowledge and private knowledge, then it seems implausible to assume that the price expectations of different agents will always be in accord. Nevertheless, it is not necessarily inconceivable, though perhaps improbable, that agents will all arrive at the same expectations of future prices.

In the single-period equilibrium, all agents share common knowledge of the equilibrium prices of all commodities. But in intertemporal equilibrium, agents lack knowledge of the future and can only form expectations of future prices derived from their own, more or less accurate, stocks of private knowledge. However, an equilibrium may still come about if, based on their private knowledge, they arrive at sufficiently similar expectations of future prices for their plans for current and future purchases and sales to be mutually compatible.

Thus, in the Mengerian view articulated by Hayek, intertemporal equilibrium, given the diversity of private knowledge and expectations, is an unlikely, but not inconceivable, state of affairs. That view stands in sharp contrast to the argument of Paul Milgrom and Nancy Stokey (1982), who argue that under a rational-expectations equilibrium there is no private knowledge, only common knowledge, and that it would be impossible for any trader to trade on private knowledge, because no other trader with rational expectations would be willing to trade with anyone at a price other than the equilibrium price.

Some twenty years after Arrow called attention to the explanatory gap in neoclassical theory by observing that there is no neoclassical theory of how competitive prices can change, Milgrom and Stokey thus turned Arrow’s argument on its head, arguing that, under rational expectations, no trading would ever occur at prices other than equilibrium prices, so that it would be impossible for a trader with private information to take advantage of that information. This argument seems to suffer from a widely shared misunderstanding of what rational expectations signify.

Rational expectations is not a property of individual agents making rational and efficient use of information, from whatever source it is acquired. As I have previously explained here (and in a revised version here), rational expectations is a property of intertemporal equilibrium; it is not an intrinsic property that agents have by virtue of being rational, just as the fact that the three angles of a triangle sum to 180 degrees is not a property of the angles qua angles, but a property of the triangle. When the expectations that agents hold about future prices are identical, their expectations are equilibrium expectations, and they are rational. That agents hold rational expectations in equilibrium does not mean that the agents are possessed of the power to calculate equilibrium prices or even to know whether their expectations of future prices are equilibrium expectations. Equilibrium is the cause of rational expectations; rational expectations do not exist if the conditions for equilibrium aren’t satisfied. See Blume, Curry and Easley (2006).

The assumption, now routinely regarded as axiomatic, that rational expectations is sufficient to ensure that equilibrium is automatically achieved, and that agents’ price expectations necessarily correspond to equilibrium price expectations, is a form of question begging disguised as a methodological imperative requiring all macroeconomic models to be properly microfounded. The newly published volume edited by Arnon, Young and van der Beek, Expectations: Theory and Applications from Historical Perspectives, contains a wonderful essay by Duncan Foley that elucidates these issues.

In his centenary retrospective on Menger’s contribution, Hayek (1970), commenting on the inexactness of Menger’s account of economic theory, focused on Menger’s reluctance to embrace mathematics as an expository medium with which to articulate economic-theoretical concepts. While this may have been an aspect of Menger’s skepticism about mathematical reasoning, his recognition that expectations of the future are inherently inexact and conjectural, more akin to a range of potential outcomes of differing probabilities, may have been an even more significant factor in how Menger chose to articulate his theoretical vision.

But it is noteworthy that Hayek (1937) explicitly recognized that there is no theoretical explanation accounting for any tendency toward intertemporal equilibrium, and instead merely (and in 1937!) relied on an empirical tendency of economies to move in the direction of equilibrium as a justification for considering economic theory to have any practical relevance.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystalize ideas about which I have been ruminating and writing for a long time, especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the search for microfoundations, making clear that the search long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay, by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor to F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which neither Holmes nor Moriarty can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, demonstrated the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another mathematician in Menger’s circle, John von Neumann, who was interested in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point for the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic setting to be reconciled – i.e., rendered mutually consistent – in equilibrium. Under Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general-equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general-equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general-equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis, (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model’; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, modeling a static one-period economy not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of differential equations, describing market-price adjustments as functions of market excess demands, would converge on an equilibrium price vector, the convergence to be established by Lyapunov’s methods. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
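Samuelson’s formulation, in stylized form (my notation, a sketch rather than his exact specification): prices adjust in proportion to excess demands, and stability is established by exhibiting a Lyapunov function that declines along every trajectory of the adjustment process:

\[
\dot{p}_i = k_i\, z_i(p), \qquad k_i>0, \qquad i=1,\dots,n,
\]
\[
V(p) = \sum_{i=1}^{n}\frac{(p_i-p_i^{*})^{2}}{k_i},
\qquad
\dot V(p) = 2\sum_{i=1}^{n}(p_i-p_i^{*})\,z_i(p) = -\,2\,p^{*}\!\cdot z(p) \le 0 ,
\]

where \(p^{*}\) is an equilibrium price vector and the last equality uses Walras’s Law. The inequality \(p^{*}\!\cdot z(p)\ge 0\) (with equality only at equilibrium) can be established only under restrictive conditions on excess demand, such as gross substitutability, which is precisely why the Sonnenschein-Mantel-Debreu results were so damaging to this line of research.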

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped the determination of the intertemporal-equilibrium price vector into a preliminary period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be, and are, taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had raised in the 1930s. Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay, by Phil Mirowski, investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to its Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only possible. In the event, expectations were not introduced as an equilibrating variable until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in doing so, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium, without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay, by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”), is a philosophically sophisticated analysis of the methodological principles underlying Lucasian microfoundations. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous, microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight of hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different conception of microfoundations espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge: the pretense of upholding the principle conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To assert that, because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. Under that consensus, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model and the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary-equilibrium paradigm might have provided a viable way forward, and a basis for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency, derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone answer, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher, in his important work The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance on how to improve the current sorry state of macroeconomics.

 

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving a “meaningful theorem” from the first- and second-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough that the effects of the parameter change on other markets can be neglected.
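
The underlying logic can be stated in one line (my notation, not Samuelson’s): if x*(α) solves the first-order condition f_x(x, α) = 0 for a maximum of some objective f, then differentiating the first-order condition gives

\frac{dx^*}{d\alpha} = -\,\frac{f_{x\alpha}}{f_{xx}} ,

and because the second-order condition for a maximum requires f_{xx} < 0, the sign of dx*/dα is the sign of f_{xα}. That sign restriction is the kind of “meaningful theorem” the method delivers.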

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems” were misnamed, a reflection of mid-20th-century operationalism. They can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices (one for each market), computes desired purchases and sales at those prices, and then, following an adjustment algorithm, announces new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.
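
The flavor of the non-convergence results can be conveyed by a small numerical sketch (the Python code and starting values below are mine, and the example is Scarf’s classic three-good, three-consumer economy rather than the SMD theorem itself). Each consumer owns one unit of one good and wants two goods in fixed proportions; under the tatonnement rule dp/dt = z(p) the price path keeps circling the unique equilibrium at p = (1, 1, 1) instead of converging to it.

import numpy as np

def excess_demand(p):
    # Scarf's example: consumer j owns one unit of good j and has Leontief
    # preferences over goods j and j+1 (mod 3), so he spends his income p[j]
    # on equal quantities of the two goods: p[j] / (p[j] + p[j+1]) of each.
    z = np.zeros(3)
    for j in range(3):
        z[j] = (p[j] / (p[j] + p[(j + 1) % 3])              # consumer j's demand for good j
                + p[(j - 1) % 3] / (p[(j - 1) % 3] + p[j])  # consumer j-1's demand for good j
                - 1.0)                                      # minus the one unit supplied
    return z

p = np.array([1.2, 1.0, 0.8])   # start away from the equilibrium (1, 1, 1)
dt, steps = 0.01, 200_000       # simple Euler integration of dp/dt = z(p)
for _ in range(steps):
    p = p + dt * excess_demand(p)

# Walras's Law (p . z = 0) keeps the path on (approximately) a sphere, so the
# prices keep orbiting the equilibrium; the gap below does not shrink to zero.
print(p, np.linalg.norm(p - np.ones(3)))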

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.
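
Stated compactly (a standard textbook formulation, not a quotation from the ADM literature): an equilibrium price vector p^* satisfies

z(p^*) \le 0, \qquad p^* \ge 0, \qquad p^* \cdot z(p^*) = 0 ,

so every good with a positive price has zero excess demand, and any good in excess supply at p^* must carry a zero price.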

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model in a 1959 paper on price adjustment, observing that, if every agent takes prices as given, the model leaves no one whose job it is to adjust prices when markets fail to clear.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.
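
In its standard formalization (my summary of the hypothesis, not a quotation), rational expectations equates every agent’s subjective forecast with the conditional expectation implied by the model itself:

E^i_t[\,p_{t+1}\,] = E[\,p_{t+1} \mid I_t\,] \quad \text{for every agent } i ,

where I_t is the commonly available information at time t. Forecast errors are then uncorrelated with anything in I_t, and all agents share the same expectations by construction, which is precisely how the assumption does the work that an explicit equilibrating mechanism would otherwise have to do.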

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning with each agent possessing the knowledge only possessed by an omniscient central planner.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, arbitrage tending to force transaction prices toward uniformity notwithstanding the existence of expectational differences.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

On Equilibrium in Economic Theory

Here is the introduction to a new version of my paper, “Hayek and Three Concepts of Intertemporal Equilibrium” which I presented last June at the History of Economics Society meeting in Toronto, and which I presented piecemeal in a series of posts last May and June. This post corresponds to the first part of this post from last May 21.

Equilibrium is an essential concept in economics. While equilibrium is an essential concept in other sciences as well, and was probably imported into economics from physics, its meaning in economics cannot simply be carried over from physics. The dissonance between the physical meaning of equilibrium and its economic interpretation required a lengthy process of explication and clarification before the concept, and its essential though limited role in economic theory, could be coherently explained.

The concept of equilibrium having originally been imported from physics at some point in the nineteenth century, economists probably thought it natural to think of an economic system in equilibrium as analogous to a physical system at rest, in the sense of a system in which there was no movement or in which all movements were repetitive. But what would it mean for an economic system to be at rest? The obvious answer was to say that the prices of goods and the quantities produced, exchanged and consumed would not change. If supply equals demand in every market, and if no exogenous disturbance (in population, technology, tastes, etc.) displaces the system, then there would seem to be no reason for the prices paid and the quantities produced to change. But that conception of an economic system at rest was understood to be overly restrictive, given the large, and perhaps causally important, share of economic activity – savings and investment – that is predicated on the assumption and expectation that prices and quantities will not remain constant.

The model of a stationary economy at rest, in which all economic activity simply repeats what has already happened before, did not seem very satisfying or informative to economists, but that view of equilibrium remained dominant in the nineteenth century and for perhaps the first quarter of the twentieth. Equilibrium was not an actual state that an economy could achieve; it was just an end state toward which economic processes would move if given sufficient time to play themselves out with no disturbing influences. This idea of a stationary, timeless equilibrium is found in the writings of the classical economists, especially Ricardo and Mill, who used the idea of a stationary state as the end state towards which natural economic processes were driving an economic system.

This not-very-satisfactory concept of equilibrium was undermined when Jevons, Menger, Walras, and their followers began to develop the idea of optimizing decisions by rational consumers and producers. The notion of optimality provided the key insight that made it possible to refashion the earlier classical equilibrium concept into a new, more fruitful and robust, version.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, and subject to limitations or constraints imposed by their capacities, endowments, technologies, and the legal system, then the equilibrium of an economy can be understood as a state in which each agent, given his subjective ranking of the feasible alternatives, is making an optimal decision, and each optimal decision is both consistent with, and contingent upon, those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell. But every decision, just like every piece in a jig-saw puzzle, must fit perfectly with every other decision. If any decision is suboptimal, none of the other decisions contingent upon that decision can be optimal.
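
The idea can be put schematically (a formulation of my own, for a pure-exchange setting). Each agent i, with endowment e^i, chooses a plan x^i that is optimal at the prices p he faces, and equilibrium requires that those independently chosen plans be jointly feasible:

x^i \in \arg\max \{\, U_i(x) : p \cdot x \le p \cdot e^i \,\} \quad \text{for every } i, \qquad \sum_i x^i = \sum_i e^i .

The first condition expresses individual optimality; the second, the mutual consistency of the optimal plans.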

The idea of an equilibrium as a set of independently conceived, mutually consistent, optimal plans was latent in the earlier notions of equilibrium, but it could only be coherently articulated on the basis of a notion of optimality. Originally framed in terms of utility maximization, the notion was gradually extended to encompass the ideas of cost minimization and profit maximization. The general concept of an optimal plan having been grasped, it then became possible to formulate a generically economic idea of equilibrium, not in terms of a system at rest, but in terms of the mutual consistency of optimal plans. Once equilibrium was conceived as the mutual consistency of optimal plans, the needless restrictiveness of defining equilibrium as a system at rest became readily apparent, though it remained little noticed and its significance overlooked for quite some time.

Because the defining characteristics of economic equilibrium are optimality and mutual consistency, change, even non-repetitive change, is not logically excluded from the concept of equilibrium as it was from the idea of an equilibrium as a stationary state. An optimal plan may be carried out, not just at a single moment, but over a period of time. Indeed, the idea of an optimal plan is, at the very least, suggestive of a future that need not simply repeat the present. So, once the idea of equilibrium as a set of mutually consistent optimal plans was grasped, it was to be expected that the concept of equilibrium could be formulated in a manner that accommodates the existence of change and development over time.

But the manner in which change and development could be incorporated into an equilibrium framework of optimality was not entirely straightforward, and it required an extended process of further intellectual reflection to formulate the idea of equilibrium in a way that gives meaning and relevance to the processes of change and development that make the passage of time something more than merely a name assigned to one of the n dimensions in vector space.

This paper examines the slow process by which the concept of equilibrium was transformed from a timeless or static concept into an intertemporal one by focusing on the pathbreaking contribution of F. A. Hayek who first articulated the concept, and exploring the connection between his articulation and three noteworthy, but very different, versions of intertemporal equilibrium: (1) an equilibrium of plans, prices, and expectations, (2) temporary equilibrium, and (3) rational-expectations equilibrium.

But before discussing these three versions of intertemporal equilibrium, I summarize in section two Hayek’s seminal 1937 contribution clarifying the necessary conditions for the existence of an intertemporal equilibrium. Then, in section three, I elaborate on an important, and often neglected, distinction, first stated and clarified by Hayek in his 1937 paper, between perfect foresight and what I call contingently correct foresight. That distinction is essential for an understanding of the distinction between the canonical Arrow-Debreu-McKenzie (ADM) model of general equilibrium, and Roy Radner’s 1972 generalization of that model as an equilibrium of plans, prices and price expectations, which I describe in section four.

Radner’s important generalization of the ADM model captured the spirit and formalized Hayek’s insights about the nature and empirical relevance of intertemporal equilibrium. But to be able to prove the existence of an equilibrium of plans, prices and price expectations, Radner had to make assumptions about agents that Hayek, in his philosophically parsimonious view of human knowledge and reason, had been unwilling to accept. In section five, I explore how J. R. Hicks’s concept of temporary equilibrium, clearly inspired by Hayek, though credited by Hicks to Erik Lindahl, provides an important bridge connecting the pure hypothetical equilibrium of correct expectations and perfect consistency of plans with the messy real world in which expectations are inevitably disappointed and plans routinely – and sometimes radically – revised. The advantage of the temporary-equilibrium framework is to provide the conceptual tools with which to understand how financial crises can occur and how such crises can be propagated and transformed into economic depressions, thereby making possible the kind of business-cycle model that Hayek tried unsuccessfully to create. But just as Hicks unaccountably failed to credit Hayek for the insights that inspired his temporary-equilibrium approach, Hayek failed to see the potential of temporary equilibrium as a modeling strategy that combines the theoretical discipline of the equilibrium method with the reality of expectational inconsistency across individual agents.

In section six, I discuss the Lucasian idea of rational expectations in macroeconomic models, mainly to point out that, in many ways, it simply assumes away the problem of the consistency of plans and expectations with which Hayek, Hicks, Radner and others who developed the idea of intertemporal equilibrium were so profoundly concerned.

Roy Radner and the Equilibrium of Plans, Prices and Price Expectations

In this post I want to discuss Roy Radner’s treatment of an equilibrium of plans, prices, and price expectations (EPPPE) and its relationship to Hayek’s conception of intertemporal equilibrium, of which Radner’s treatment is a technically more sophisticated version. Although I have seen no evidence that Radner was directly influenced by Hayek’s work, I consider Radner’s conception of EPPPE to be a version of Hayek’s conception of intertemporal equilibrium, because it captures the essential properties of Hayek’s conception of intertemporal equilibrium as a situation in which agents independently formulate their own optimizing plans based on the prices that they actually observe – their common knowledge – and on the future prices that they expect to observe over the course of their planning horizons. While currently observed prices are common knowledge – not necessarily a factual description of economic reality, but not an entirely unreasonable simplifying assumption – the prices that individual agents expect to observe in the future are subjective knowledge, based on whatever common or private knowledge individuals may have and whatever methods they may be using to form their expectations of the prices that will be observed in the future. An intertemporal equilibrium refers to a set of decentralized plans that are both a) optimal from the standpoint of every agent’s own objectives, given their common knowledge of current prices and their subjective expectations of future prices, and b) mutually consistent.

If an agent has chosen an optimal plan given current and expected future prices, that plan will not be changed unless the agent acquires new information that renders the existing plan sub-optimal relative to the new information. Otherwise, there would be no reason for the agent to deviate from an optimal plan. The new information that could cause an agent to change a formerly optimal plan would either affect the preferences of the agent or the technology available to the agent, or would somehow be reflected in current prices or in expected future prices. But it seems improbable that there could be a change in preferences or technology that would not also be reflected in current or expected future prices. So, absent a change in current or expected future prices, there would seem to be almost no likelihood that an agent would deviate from a plan that was optimal given current prices and the future prices expected by the agent.

The mutual consistency of the optimizing plans of independent agents therefore turns out to be equivalent to the condition that all agents observe the same current prices – their common knowledge – and have exactly the same forecasts of the future prices upon which they have relied in choosing their optimal plans. Even should their forecasts of future prices turn out to be wrong, at the moment before their forecasts of future prices were changed or disproved by observation, their plans were still mutually consistent relative to the information on which their plans had been chosen. The failure of the equilibrium to be maintained could be attributed to a change in information that meant that the formerly optimal plans were no longer optimal given the newly acquired information. But until the new information became available, the mutual consistency of optimal plans at that (fleeting) moment signified an equilibrium state. Thus, the defining characteristic of an intertemporal equilibrium in which current prices are common knowledge is that all agents share the same expectations of the future prices on which their optimal plans have been based.

There are fundamental differences between the Arrow-Debreu-McKenzie (ADM) equilibrium and the EPPPE. One difference worth mentioning is that, under the standard assumptions of the ADM model, the equilibrium is Pareto-optimal, and any Pareto-optimum allocation, by a suitable redistribution of initial endowments, could be achieved as a general equilibrium (two welfare theorems). These results do not generally hold for EPPPE, because, in contrast to the ADM model, it is possible for agents in EPPPE to acquire additional information over time, not only passively, but by investing resources in the production of information. Investing resources in the production of information can cause inefficiency in two ways: first, by creating non-convexities (owing to start-up costs in information gathering activities) that are inconsistent with the uniform competitive prices characteristic of the ADM equilibrium, and second, by creating incentives to devote resources to produce information whose value is derived from profits in trading with less well-informed agents. The latter source of inefficiency was discovered by Jack Hirshleifer in his classic 1971 paper, which I have written about in several previous posts (here, here, here, and here).

But the important feature of Radner’s EPPPE that I want to emphasize here — and what radically distinguishes it from the ADM equilibrium — is its fragility. Unlike the ADM equilibrium which is established once and forever at time zero of a model in which all production and consumption starts in period one, the EPPPE, even if it ever exists, is momentary, and is subject to unraveling whenever there is a change in the underlying information upon which current prices and expected future prices depend, and upon which agents, in choosing their optimal plans, rely. Time is not just, as it is in the ADM model, an appendage to the EPPPE, and, as a result, EPPPE can account for many phenomena, practices, and institutions that are left out of the ADM model.

The two differences that are most relevant in this context are the existence of stock markets in which shares of firms are traded based on expectations of the future net income streams associated with those firms, and the existence of a medium of exchange supplied by private financial intermediaries known as banks. In the ADM model in which all transactions are executed in time zero, in advance of all the actual consumption and production activities determined by those transactions, there would be no reason to hold, or to supply, a medium of exchange. The ADM equilibrium allows for agents to borrow or lend at equilibrium interest rates to optimize the time profiles of their consumption relative to their endowments and the time profiles of their earnings. Since all such transactions are consummated in time zero, and since, through some undefined process, the complete solvency and the integrity of all parties to all transactions is ascertained in time zero, the probability of a default on any loan contracted at time zero is zero. As a result, each agent faces a single intertemporal budget constraint at time zero over all periods from 1 to n. Walras’s Law therefore holds across all time periods for this intertemporal budget constraint, each agent transacting at the same prices in each period as every other agent does.

Once an equilibrium price vector is established in time zero, each agent knows that his optimal plan based on that price vector (which is the common knowledge of all agents) will be executed over time exactly as determined in time zero. There is no reason for any exchange of ownership shares in firms, the future income streams from each firm being known in advance.

The ADM equilibrium is a model of an economic process very different from Radner’s EPPPE, because in EPPPE, agents have no reason to assume that their current plans, even if they are momentarily both optimal and mutually consistent with the plans of all other agents, will remain optimal and consistent with the plans of all other agents. New information can arrive or be produced that will necessitate a revision in plans. Because even equilibrium plans are subject to revision, agents must take into account the solvency and credit worthiness of counterparties with whom they enter into transactions. The potentially imperfect credit-worthiness of at least some agents enables certain financial intermediaries (aka banks) to provide a service by offering to exchange their debt, which is widely considered to be more credit-worthy than the debt of ordinary agents, to agents seeking to borrow to finance purchases of either consumption or investment goods. Many agents seeking to borrow therefore prefer exchanging their debt for bank debt, bank debt being acceptable to other agents at face value. In addition, because the acquisition of new information is possible, there is a reason for agents to engage in speculative trades of commodities or assets. Such assets include ownership shares of firms, and agents may revise their valuations of those firms as they revise their expectations about future prices and their expectations about the revised plans of those firms in response to newly acquired information.

I will discuss the special role of banks at greater length in my next post on temporary equilibrium. But for now, I just want to underscore a key point: in the EPPPE, unless all agents have the same expectations of future prices, Walras’s Law need not hold. The proof that Walras’s Law holds depends on the assumption that individual plans to buy and sell are based on every agent’s buying or selling each commodity at the same price at which every other transactor buys or sells that commodity. But in the intertemporal context, in which only current prices, not future prices, are observed, plans for current and future purchases and sales are made on the basis of expectations about future prices. If agents don’t share the same expectations about future prices, agents making plans for future purchases based on overly optimistic expectations about the prices at which they will be able to sell may make commitments to buy in the future (or commitments to repay loans to finance purchases in the present) that they will be unable to discharge. Reneging on commitments to buy in the future or to repay obligations incurred in the present may rule out the existence of even a temporary equilibrium in the future.
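
The point can be made explicit in a simple two-period sketch (the notation is mine). If agent i plans on the basis of currently observed prices p_1 and his own expected future prices \hat{p}^i_2, his planned trades satisfy only his own expected budget constraint,

p_1 \cdot (x^i_1 - e^i_1) + \hat{p}^i_2 \cdot (x^i_2 - e^i_2) = 0 .

Summing over agents yields p_1 \cdot z_1 + \sum_i \hat{p}^i_2 \cdot (x^i_2 - e^i_2) = 0, which collapses to the single Walras’s Law relation p_1 \cdot z_1 + \hat{p}_2 \cdot z_2 = 0 only if all agents share the same expected future prices. With divergent expectations, the value of planned aggregate excess demands need not sum to zero at any single price vector.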

Finally, let me add a word about Radner’s terminology. In his 1987 entry on “Uncertainty and General Equilibrium” for the New Palgrave Dictionary of Economics (here is a link to the revised online version), Radner writes:

A trader’s expectations concern both future environmental events and future prices. Regarding expectations about future environmental events, there is no conceptual problem. According to the Expected Utility Hypothesis, each trader is characterized by a subjective probability measure on the set of complete histories of the environment. Since, by definition, the evolution of the environment is exogenous, a trader’s conditional probability of a future event, given the information to date, is well defined.

It is not so obvious how to proceed with regard to trader’s expectations about future prices. I shall contrast two possible approaches. In the first, which I shall call the perfect foresight approach, let us assume that the behaviour of traders is such as to determine, for each complete history of the environment, a unique corresponding sequence of price system[s]. . .

Thus, the perfect foresight approach implies that, in equilibrium, traders have common price expectation functions. These price expectation functions indicate, for each date-event pair, what the equilibrium price system would be in the corresponding market at that date event pair. . . . [I]t follows that, in equilibrium the traders would have strategies (plans) such that if these strategies were carried out, the markets would be cleared at each date-event pair. Call such plans consistent. A set of common price expectations and corresponding consistent plans is called an equilibrium of plans, prices, and price expectations.

My only problem with Radner’s formulation here is that he is defining his equilibrium concept in terms of the intrinsic capacity of the traders to predict prices rather than in terms of the simple fact that traders form correct expectations. For purposes of the formal definition of EPPPE, it is irrelevant whether traders’ predictions of future prices are correct because they are endowed with the correct model of the economy or because they are all lucky and have happened, randomly and simultaneously, to form the same expectations of future prices. Radner also formulates an alternative version of his perfect-foresight approach in which agents don’t all share the same information. In such cases, it becomes possible for traders to make inferences about the environment by observing prices that differ from what they had expected.

The situation in which traders enter the market with different non-price information presents an opportunity for agents to learn about the environment from prices, since current prices reflect, in a possibly complicated manner, the non-price information signals received by the various agents. To take an extreme example, the “inside information” of a trader in a securities market may lead him to bid up the price to a level higher than it otherwise would have been. . . . [A]n astute market observer might be able to infer that an insider has obtained some favourable information, just by careful observation of the price movement.

The ability to infer non-price information from otherwise inexplicable movements in prices leads Radner to define a concept of rational expectations equilibrium.

[E]conomic agents have the opportunity to revise their individual models in the light of observations and published data. Hence, there is a feedback from the true relationship to the individual models. An equilibrium of this system, in which the individual models are identical with the true model, is called a rational expectations equilibrium. This concept of equilibrium is more subtle, of course, than the ordinary concept of equilibrium of supply and demand. In a rational expectations equilibrium, not only are prices determined so as to equate supply and demand, but individual economic agents correctly perceive the true relationship between the non-price information received by the market participants and the resulting equilibrium market prices.

Though this discussion is very interesting from several theoretical angles, as an explanation of what is entailed by an economic equilibrium, it misses the key point, which is the one that Hayek identified in his 1928 and (especially) 1937 articles mentioned in my previous posts. An equilibrium corresponds to a situation in which all agents have identical expectations of the future prices upon which they are basing their optimal plans, given the commonly observed current prices and the expected future prices. If all agents are indeed formulating optimal plans based on the information that they have at that moment, their plans will be mutually consistent and will be executable simultaneously without revision as long as the state of their knowledge at that instant does not change. How it happened that they arrived at identical expectations — by luck, chance, or supernatural powers of foresight — is irrelevant to that definition of equilibrium. Radner does acknowledge that, under the perfect-foresight approach, he is endowing economic agents with wildly unrealistic powers of imagination and computational capacity, but from his exposition, I am unable to decide whether he grasped the subtle but crucial point about the irrelevance of any assumption about the capacities of agents to the definition of an EPPPE.

Although it is capable of describing a richer set of institutions and behavior than is the Arrow-Debreu model, the perfect-foresight approach is contrary to the spirit of much of competitive market theory in that it postulates that individual traders must be able to forecast, in some sense, the equilibrium prices that will prevail in the future under all alternative states of the environment. . . .[T]his approach . . . seems to require of the traders a capacity for imagination and computation far beyond what is realistic. . . .

These last considerations lead us in a different direction, which I shall call the bounded rationality approach. . . . An example of the bounded-rationality approach is the theory of temporary equilibrium.

By eschewing any claims about the rationality of the agents or their computational powers, one can simply talk about whether agents do or do not have identical expectations of future prices and what the implications of those assumptions are. When expectations do agree, there is at least a momentary equilibrium of plans, prices, and price expectations. When they don't agree, the question becomes whether even a temporary equilibrium exists and what kind of dynamic process is implied by the divergence of expectations. That, it seems to me, would be a fruitful path for macroeconomics to follow. In my next post, I will discuss some of the characteristics and implications of a temporary-equilibrium approach to macroeconomics.

 

Hayek and Intertemporal Equilibrium

I am starting to write a paper on Hayek and intertemporal equilibrium, and as I write it over the next couple of weeks, I am going to post sections of it on this blog. Comments from readers will be even more welcome than usual, and I will do my utmost to reply to comments, a goal that, I am sorry to say, I have not been living up to in my recent posts.

The idea of equilibrium is an essential concept in economics. It is an essential concept in other sciences as well, but its meaning in economics is not the same as in other disciplines. The concept having originally been borrowed from physics, the meaning first attached to it by economists corresponded to the notion of a system at rest, and it took a long time for economists to see that viewing an economy as a system at rest was not the only, or even the most useful, way of applying the equilibrium concept to economic phenomena.

What would it mean for an economic system to be at rest? The obvious answer was to say that prices and quantities would not change. If supply equals demand in every market, and if there is no exogenous change introduced into the system, e.g., in population, technology, tastes, etc., there would seem to be no reason for the prices paid and quantities produced to change in that system. But that view of an economic system was a very restrictive one, because such a large share of economic activity — savings and investment — is predicated on the assumption and expectation of change.

The model of a stationary economy at rest, in which all economic activity simply repeats what has already happened before, did not seem very satisfying or informative, but that was the view of equilibrium that originally took hold in economics. The idea of a stationary timeless equilibrium can be traced back to the classical economists, especially Ricardo and Mill, who wrote about the long-run tendency of an economic system toward a stationary state. But it was the introduction by Jevons, Menger, Walras, and their followers of the idea of optimizing decisions by rational consumers and producers that provided the key insight for a more robust and fruitful version of the equilibrium concept.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, subject to limitations or constraints imposed by their capacities, endowments, technology, and the legal system, then the equilibrium of an economy must describe a state in which each agent, given his own subjective ranking of the feasible alternatives, is making an optimal decision, and those optimal decisions are consistent with those of all other agents. The decisions of each agent must be optimal from the point of view of that agent while also being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell.

The idea of an equilibrium as a set of independently conceived, mutually consistent optimal plans was latent in the earlier notions of equilibrium, but it could not be articulated until a concept of optimality had been defined. That concept was utility maximization, and it was further extended to include the ideas of cost minimization and profit maximization. Once the idea of an optimal plan was worked out, the necessary conditions for the mutual consistency of optimal plans could be articulated as the necessary conditions for a general economic equilibrium. Once equilibrium was defined as the consistency of optimal plans, the path was clear to define an intertemporal equilibrium as the consistency of optimal plans extending over time. Because current goods and services and otherwise identical goods and services in the future could be treated as economically distinct goods and services, defining the conditions for an intertemporal equilibrium was formally almost equivalent to defining the conditions for a static, stationary equilibrium. Just as the conditions for a static equilibrium could be stated in terms of equalities between the marginal rates of substitution of goods in consumption and in production and their corresponding price ratios, the conditions for an intertemporal equilibrium could be stated in terms of equalities between the marginal rates of intertemporal substitution in consumption and in production and their corresponding intertemporal price ratios.

The only formal adjustment required in the necessary conditions for static equilibrium to be extended to intertemporal equilibrium was to recognize that, inasmuch as future prices (typically) are unobservable, and hence unknown to economic agents, the intertemporal price ratios cannot be ratios between actual current prices and actual future prices, but, instead, ratios between current prices and expected future prices. From this it followed that for optimal plans to be mutually consistent, all economic agents must have the same expectations of the future prices in terms of which their plans were optimized.
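Schematically (my notation, offered only as an illustrative sketch), the familiar static condition equating a marginal rate of substitution to a price ratio carries over to the intertemporal case with an expected future price taking the place of an observed one:

\[
\text{static: } MRS_{1,2} = \frac{p_1}{p_2}, \qquad \text{intertemporal: } MRS_{t,\,t+1} = \frac{p_t}{p^e_{t+1}},
\]

where \(p^e_{t+1}\) denotes the price an agent expects to prevail at date \(t+1\). Mutual consistency of optimal plans then requires that every agent insert the same \(p^e_{t+1}\) into this condition.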

The concept of an intertemporal equilibrium was first presented in English by F. A. Hayek in his 1937 article "Economics and Knowledge." But it was through J. R. Hicks's Value and Capital, published two years later in 1939, that the concept became more widely known and understood. In explaining and applying the concept of intertemporal equilibrium and introducing the derivative concept of a temporary equilibrium, in which current markets clear but individual expectations of future prices are not the same, Hicks did not claim originality; but instead of crediting Hayek for the concept, or even mentioning Hayek's 1937 paper, Hicks credited the Swedish economist Erik Lindahl, who had published articles in the early 1930s in which he had articulated the concept. But although Lindahl had published his important work on intertemporal equilibrium before Hayek's 1937 article, Hayek had already explained the concept in a 1928 article, "Das intertemporale Gleichgewichtssystem der Preise und die Bewegungen des 'Geldwertes'" (English translation: "Intertemporal price equilibrium and movements in the value of money").

Having been a junior colleague of Hayek’s in the early 1930s when Hayek arrived at the London School of Economics, and having come very much under Hayek’s influence for a few years before moving in a different theoretical direction in the mid-1930s, Hicks was certainly aware of Hayek’s work on intertemporal equilibrium, so it has long been a puzzle to me why Hicks did not credit Hayek along with Lindahl for having developed the concept of intertemporal equilibrium. It might be worth pursuing that question, but I mention it now only as an aside, in the hope that someone else might find it interesting and worthwhile to try to find a solution to that puzzle. As a further aside, I will mention that Murray Milgate in a 1979 article “On the Origin of the Notion of ‘Intertemporal Equilibrium’” has previously tried to redress the failure to credit Hayek’s role in introducing the concept of intertemporal equilibrium into economic theory.

What I am going to discuss here and in future posts are three distinct ways in which the concept of intertemporal equilibrium has been developed since Hayek's early work — his 1928 and 1937 articles, but also his 1941 discussion of intertemporal equilibrium in The Pure Theory of Capital. Of course, the best-known development of the concept of intertemporal equilibrium is the Arrow-Debreu-McKenzie (ADM) general-equilibrium model. But although it can be thought of as a model of intertemporal equilibrium, the ADM model is set up in such a way that all economic decisions are taken before the clock even starts ticking; the transactions that are executed once the clock does start simply follow a pre-determined script. In the ADM model, the passage of time is a triviality, merely a way of recording the sequential order of the predetermined production and consumption activities. This feat is accomplished by assuming that all agents are present at time zero with their property endowments in hand and capable of transacting — but conditional on the determination of an equilibrium price vector that allows all optimal plans to be simultaneously executed over the entire duration of the model — in a complete set of markets (including state-contingent markets covering the entire range of contingent events that will unfold in the course of time whose outcomes could affect the wealth or well-being of any agent, with the probabilities associated with every contingent event known in advance).

Just as identical goods in different physical locations or different time periods can be distinguished as different commodities that can be purchased at different prices for delivery at specific times and places, identical goods can be distinguished under different states of the world (ice cream on July 4, 2017 in Washington DC at 2pm only if the temperature is greater than 90 degrees). Given the complete set of state-contingent markets and the known probabilities of the contingent events, an equilibrium price vector for the complete set of markets would give rise to optimal trades reallocating the risks associated with future contingent events and to an optimal allocation of resources over time. Although the ADM model is an intertemporal model only in a limited sense, it does provide an ideal benchmark describing the characteristics of a set of mutually consistent optimal plans.
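To illustrate the bookkeeping (a sketch under standard assumptions, not a formal statement of the ADM model), a commodity is indexed by its physical description, date, and state of the world, and each agent faces a single budget constraint at time zero covering all contingent deliveries:

\[
x_{k,t,s} = \text{quantity of good } k \text{ delivered at date } t \text{ in state } s, \qquad \sum_{k,t,s} p_{k,t,s}\, x_{k,t,s} \;\le\; \sum_{k,t,s} p_{k,t,s}\, \omega_{k,t,s},
\]

where \(\omega_{k,t,s}\) is the agent's endowment of the corresponding contingent commodity and every price \(p_{k,t,s}\) is quoted, and every trade agreed upon, before the clock starts ticking.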

The seminal work of Roy Radner in relaxing some of the extreme assumptions of the ADM model puts Hayek's contribution to the understanding of the necessary conditions for an intertemporal equilibrium into proper perspective. At an informal level, Hayek was addressing the same kinds of problems that Radner analyzed with far more powerful analytical tools than were available to Hayek. But they were both concerned with a common problem: under what conditions could an economy with an incomplete set of markets be said to be in a state of intertemporal equilibrium? In an economy lacking the full set of forward and state-contingent markets characterizing the ADM model, an intertemporal equilibrium cannot be predetermined before trading even begins, but must, if such an equilibrium obtains, unfold through the passage of time. Outcomes might be expected, but they would not be predetermined in advance. Echoing Hayek, though to my knowledge he does not refer to Hayek in his work, Radner describes his intertemporal equilibrium under uncertainty as an equilibrium of plans, prices, and price expectations. Even if it exists, the Radner equilibrium is not the same as the ADM equilibrium, because without a full set of markets, agents can't fully hedge against, or insure against, all the risks to which they are exposed. The distinction between ex ante and ex post is not eliminated in the Radner equilibrium, though it is eliminated in the ADM equilibrium.

Additionally, because all trades in the ADM model have been executed before "time" begins, it seems impossible to rationalize holding any asset whose only use is to serve as a medium of exchange. In his early writings on business cycles, e.g., Monetary Theory and the Trade Cycle, Hayek questioned whether it would be possible to rationalize the holding of money in the context of a model of full equilibrium, suggesting that monetary exchange, by severing the link between aggregate supply and aggregate demand characteristic of a barter economy as described by Say's Law, was the source of systematic deviations from the intertemporal equilibrium corresponding to the solution of a system of Walrasian equations. Hayek suggested that progress in analyzing economic fluctuations would be possible only if the Walrasian equilibrium method could somehow be extended to accommodate the existence of money, uncertainty, and other characteristics of the real world while maintaining the analytical discipline imposed by the equilibrium method and the optimization principle. It proved to be a task requiring resources that were beyond those at Hayek's, or probably anyone else's, disposal at the time. But it would be wrong to fault Hayek for having had the insight to perceive and frame a problem that was beyond his capacity to solve. What he may be criticized for is mistakenly believing that he had in fact grasped the general outlines of a solution when he had only perceived some aspects of the solution, and for offering seriously inappropriate policy recommendations based on that incomplete understanding.

In Value and Capital, Hicks also expressed doubts whether it would be possible to analyze the economic fluctuations characterizing the business cycle using a model of pure intertemporal equilibrium. He proposed an alternative approach for analyzing fluctuations, which he called the method of temporary equilibrium. The essence of the temporary-equilibrium method is to analyze the behavior of an economy under the assumption that all markets for current delivery clear (in some not entirely clear sense of the term "clear") while understanding that demand and supply in current markets depend not only on current prices but also upon expected future prices, and that the failure of current prices to equal what they had been expected to be is a potential cause for the plans that economic agents are trying to execute to be modified or abandoned. In The Pure Theory of Capital, Hayek discussed Hicks's temporary-equilibrium method as a possible way of achieving the modification of the Walrasian method that he himself had proposed in Monetary Theory and the Trade Cycle. But after a brief critical discussion of the method, he dismissed it for reasons that remain obscure. Hayek's rejection of the temporary-equilibrium method seems in retrospect to have been one of his worst theoretical — or perhaps meta-theoretical — blunders.

Decades later, C. J. Bliss developed the concept of temporary equilibrium to show that the temporary-equilibrium method can rationalize both holding an asset purely for its services as a medium of exchange and the existence of financial intermediaries (private banks) that supply financial assets held exclusively to serve as a medium of exchange. In such a temporary-equilibrium model with financial intermediaries, it seems possible to model not only the existence of private suppliers of a medium of exchange, but also the conditions — in a very general sense — under which the system of financial intermediaries breaks down. The key variables, of course, are the vectors of expected prices subject to which the plans of individual households, business firms, and financial intermediaries are optimized. The critical point that emerges from Bliss's analysis is that there are sets of expected prices which, if held by agents, are inconsistent with the existence of even a temporary equilibrium. In such cases, price flexibility in current markets cannot, even in principle, result in a temporary equilibrium, because there is no vector of current prices in markets for present delivery that solves the temporary-equilibrium system. Even perfect price flexibility doesn't lead to equilibrium if the equilibrium does not exist. And the equilibrium cannot exist if price expectations are, in some sense, "too far out of whack."

Expected prices are thus, necessarily, equilibrating variables. But there is no economic mechanism that tends to cause the adjustment of expected prices so that they are consistent with the existence of even a temporary equilibrium, much less a full equilibrium.
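The point can be put schematically (a sketch in my own notation, not Bliss's). A temporary equilibrium, given the vector of expected future prices \(p^e\) held by agents, is a vector of current prices \(p\) at which aggregate excess demand in the markets for present delivery vanishes:

\[
z(p;\, p^e) = 0.
\]

As I read Bliss, for some vectors \(p^e\) no such \(p\) exists; current-price flexibility then has nothing to equilibrate.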

Unfortunately, modern macroeconomics continues to neglect the temporary-equilibrium method; instead, macroeconomists have for the most part insisted on the adoption of the rational-expectations hypothesis, a hypothesis that elevates question-begging to the status of a fundamental axiom of rationality. The crucial error in the rational-expectations hypothesis was to misunderstand the role of the comparative-statics method developed by Samuelson in Foundations of Economic Analysis. The role of the comparative-statics method is to isolate the pure theoretical effect of a parameter change under a ceteris-paribus assumption. Such an effect could be derived only by comparing two equilibria under the assumption of a locally unique and stable equilibrium before and after the parameter change. But the method of comparative statics is completely inappropriate to most macroeconomic problems, which are precisely concerned with the failure of the economy to achieve, or even to approximate, the unique and stable equilibrium state posited by the comparative-statics method.

Moreover, the original empirical application of the rational-expectations hypothesis by Muth was in the context of the behavior of a single market in which the market was dominated by well-informed specialists who could be presumed to have well-founded expectations of future prices conditional on a relatively stable economic environment. Under conditions of macroeconomic instability, there is good reason to doubt that the accumulated knowledge and experience of market participants would enable agents to form accurate expectations of the future course of prices even in those markets about which they have expert knowledge. Insofar as the rational-expectations hypothesis has any claim to empirical relevance, it is only in the context of stable market situations that can be assumed to be already operating in the neighborhood of an equilibrium. For the kinds of problems that macroeconomists are really trying to answer, that assumption is neither relevant nor appropriate.

Roger Farmer’s Prosperity for All

I have just read a review copy of Roger Farmer’s new book Prosperity for All, which distills many of Roger’s very interesting ideas into a form which, though readable, is still challenging — at least, it was for me. There is a lot that I like and agree with in Roger’s book, and the fact that he is a UCLA economist, though he came to UCLA after my departure, is certainly a point in his favor. So I will begin by mentioning some of the things that I really liked about Roger’s book.

What I like most is that he recognizes that beliefs are fundamental, which is almost exactly what I meant when I wrote this post ("Expectations Are Fundamental") five years ago. The point I wanted to make is that the idea that there is some fundamental existential reality that economic agents try to — and, if they are rational, will — perceive is a gross and misleading oversimplification, because expectations themselves are part of reality. In a world in which expectations are fundamental, the Keynesian beauty-contest theory of expectations and stock prices (described in chapter 12 of The General Theory) is not as absurd as it is widely considered to be by believers in the efficient-market hypothesis. The almost universal unprofitability of simple trading rules or algorithms is not inconsistent with a market process in which the causality between prices and expectations goes in both directions, in which case anticipating expectations is no less rational than anticipating future cash flows.

One of the treats of reading this book is Farmer's recollections of his time as a graduate student at Penn in the early 1980s, when David Cass, Karl Shell, and Costas Azariadis were developing their theory of sunspot equilibrium, in which expectations are self-fulfilling, an idea skillfully deployed by Roger to revise the basic New Keynesian model and reorient it along a very different path from the standard New Keynesian one. I am sympathetic to that reorientation, the main reason for which is that Roger rejects the idea that there is a unique equilibrium to which the economy, on its own, automatically reverts, albeit somewhat more slowly than if sped along by the appropriate monetary policy. The notion that there is a unique equilibrium to which the economy automatically reverts is an assumption with no basis in theory or experience. The most that the natural-rate hypothesis can tell us is that if an economy is operating at its natural rate of unemployment, monetary expansion cannot permanently reduce the rate of unemployment below that natural rate. Eventually — once economic agents come to expect that the monetary expansion and the correspondingly higher rate of inflation will be maintained indefinitely — the unemployment rate must revert to the natural rate. But the natural-rate hypothesis does not tell us that monetary expansion cannot reduce unemployment when the actual unemployment rate exceeds the natural rate, although it is often misinterpreted as making that assertion.

In his book, Roger takes the anti-natural-rate argument a step further, asserting that the natural rate of unemployment is not unique. There is actually a range of unemployment rates at which the economy can permanently remain; which of those alternative natural rates the economy winds up at depends on the expectations held by the public about future nominal income. The higher expected future income, the greater consumption spending and, consequently, the greater employment. Things are a bit more complicated than I have just described them, because Roger also believes that consumption depends not on current income but on wealth. However, in the very simplified model with which Roger operates, wealth depends on expectations about future income. The more optimistic people are about their income-earning opportunities, the higher asset values; the higher asset values, the wealthier the public, and the greater consumption spending. The relationship between current income and expected future income is what Roger calls the belief function.

Thus, Roger juxtaposes a simple New Keynesian model against his own monetary model. The New Keynesian model consists of 1) an investment-equals-saving equilibrium condition (IS curve) describing the optimal consumption/savings decision of the representative individual as a locus of combinations of expected real interest rates and real income, based on the assumed rate of time preference of the representative individual, expected future income, and expected future inflation; 2) a Taylor rule describing how the monetary authority sets its nominal interest rate as a function of inflation, the output gap, and its target (natural) nominal interest rate; 3) a short-run Phillips Curve that expresses actual inflation as a function of expected future inflation and the output gap. The three basic equations allow three endogenous variables (inflation, real income, and the nominal rate of interest) to be determined. The IS curve represents equilibrium combinations of real income and real interest rates; the Taylor rule determines a nominal interest rate; given the nominal rate determined by the Taylor rule, the IS curve can be redrawn to represent equilibrium combinations of real income and inflation. The intersection of the redrawn IS curve with the Phillips curve determines the inflation rate and real income.
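For readers who like to see the equations, the three relationships are usually written, in a standard log-linearized textbook form (a sketch; Roger's own notation and details may differ), as:

\begin{align*}
\text{IS curve:} \quad & y_t = E_t\, y_{t+1} - \sigma\,\big(i_t - E_t\,\pi_{t+1} - \rho\big), \\
\text{Taylor rule:} \quad & i_t = \bar{\imath} + \phi_\pi\, \pi_t + \phi_y\, y_t, \\
\text{Phillips curve:} \quad & \pi_t = \beta\, E_t\,\pi_{t+1} + \kappa\, y_t,
\end{align*}

where \(y_t\) is real income (or the output gap), \(\pi_t\) inflation, \(i_t\) the nominal interest rate, \(\rho\) the rate of time preference, and \(\bar{\imath}\) the monetary authority's target nominal rate.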

Roger doesn’t like the New Keynesian model because he rejects the notion of a unique equilibrium with a unique natural rate of unemployment, a notion that I have argued is theoretically unfounded. Roger dismisses the natural-rate hypothesis on empirical grounds, the frequent observations of persistently high rates of unemployment being inconsistent with the idea that there are economic forces causing unemployment to revert back to the natural rate. Two responses to this empirical anomaly are possible: 1) the natural rate of unemployment is unstable, so that the observed persistence of high unemployment reflects increases in the underlying but unobservable natural rate of unemployment; 2) the adverse economic shocks that produce high unemployment are persistent, with unemployment returning to a natural level only after the adverse shocks have ceased. In the absence of independent empirical tests of the hypothesis that the natural rate of unemployment has changed, or of the hypothesis that adverse shocks causing unemployment to rise above the natural rate are persistent, neither of these responses is plausible, much less persuasive.

So Roger recasts the basic New Keynesian model in a very different form. While maintaining the Taylor Rule, he rewrites the IS curve so that it describes a relationship between the nominal interest rate and the expected growth of nominal income, given the assumed rate of time preference, and in place of the Phillips Curve he substitutes his belief function, which says that the expected growth of nominal income in the next period equals the current rate of growth. The IS curve and the Taylor Rule provide two steady-state equations in three variables (nominal-income growth, the nominal interest rate, and inflation), so that the rate of inflation is left undetermined. Once the belief function specifies the expected rate of growth of nominal income, the nominal interest rate consistent with expected nominal-income growth is determined. Since the belief function tells us only that expected nominal-income growth equals the current rate of nominal-income growth, any change in nominal-income growth persists into the next period.
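Again schematically, and at the risk of oversimplifying Roger's formulation (the notation is mine), the recast system looks something like:

\begin{align*}
\text{IS curve:} \quad & i_t = \rho + E_t\, x_{t+1}, \\
\text{Taylor rule:} \quad & i_t = \bar{\imath} + \phi_\pi\, \pi_t, \\
\text{Belief function:} \quad & E_t\, x_{t+1} = x_t,
\end{align*}

where \(x_t\) is the growth rate of nominal income. The first two equations alone leave inflation undetermined; the belief function pins down expected nominal-income growth, and with it the nominal interest rate, so that any change in the current growth of nominal income is carried forward into next period's expectations.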

At any rate, Roger’s policy proposal is not to change the interest-rate rule followed by the monetary authority, but to propose a rule whereby the monetary authority influences the public’s expectations of nominal-income growth. The greater expected nominal-income growth, the greater wealth, and the greater consumption expenditures. The greater consumption expenditures, the greater income and employment. Expectations are self-fulfilling. Roger therefore advocates a policy by which the government buys and sells a stock-market index fund in order to keep overall wealth at a level that will generate enough consumption expenditures to support maximum sustainable employment.

This is a quick summary of some of the main substantive arguments that Roger makes in his book, and I hope that I have not misrepresented them too badly. As I have already said, I very much sympathize with his criticism of the New Keynesian model, and I agree with nearly all of his criticisms. I also agree wholeheartedly with his emphasis on the importance of expectations and on the self-fulfilling character of expectations. Nevertheless, I have to admit that I have trouble taking seriously Roger's own monetary model and his policy proposal for stabilizing a broad index of equity prices over time. And the reason I am so skeptical about Roger's model and his policy recommendation is that his model, which does after all bear at least a family resemblance to the simple New Keynesian model, strikes me as being far too simplified to be credible as a representation of a real-world economy. His model, like the New Keynesian model, is an intertemporal model with neither money nor real capital, and the idea that there is an interest rate in such a model is, though theoretically defensible, not very plausible. There may be a sequence of periods in such a model in which some form of intertemporal exchange takes place, but without explicitly introducing at least one good that is carried over from period to period, the extent of intertemporal trading is limited and devoid of the arbitrage constraints inherent in a system in which real assets are held from one period to the next.

So I am very skeptical about any macroeconomic model lacking a market for real assets, in which the interest rate would interact with asset values and expected future prices in such a way that the existing stock of durable assets is willingly held over time. The simple New Keynesian model, in which there is no money and no durable assets, but simply bonds whose existence is difficult to rationalize in the absence of money or durable assets, does not strike me as a sound foundation for making macroeconomic policy. An interest rate may exist in such a model, but such a model strikes me as woefully inadequate for macroeconomic policy analysis. And although Roger has certainly offered some interesting improvements on the simple New Keynesian model, I would not be willing to rely on Roger's monetary model for the sweeping policy and institutional recommendations that he proposes, especially his proposal for stabilizing the long-run growth path of a broad index of stock prices.

This is an important point, so I will try to restate it within a wider context. Modern macroeconomics, of which Roger’s model is one of the more interesting examples, flatters itself by claiming to be grounded in the secure microfoundations of the Arrow-Debreu-McKenzie general equilibrium model. But the great achievement of the ADM model was to show the logical possibility of an equilibrium of the independently formulated, optimizing plans of an unlimited number of economic agents producing and trading an unlimited number of commodities over an unlimited number of time periods.

To prove the mutual consistency of such a decentralized decision-making process coordinated by a system of equilibrium prices was a remarkable intellectual achievement. Modern macroeconomics deceptively trades on the prestige of this achievement in claiming to be founded on the ADM general-equilibrium model; the claim is at best misleading, because modern macroeconomics collapses the multiplicity of goods, services, and assets into a single non-durable commodity, so that the only relevant plan the agents in the modern macromodel are called upon to make is a decision about how much to spend in the current period given a shared utility function and a shared production technology for the single output. In the process, all the hard work performed by the ADM general-equilibrium model in explaining how a system of competitive prices could achieve an equilibrium of the complex independent — but interdependent — intertemporal plans of a multitude of decision-makers is effectively discarded and disregarded.

This approach to macroeconomics is not microfounded, but its opposite. The approach relies on the assumption that all but a very small set of microeconomic issues are irrelevant to macroeconomics. Now it is legitimate for macroeconomics to disregard many microeconomic issues, but the assumption that there is continuous microeconomic coordination, apart from the handful of potential imperfections on which modern macroeconomics chooses to focus, is not legitimate. In particular, collapsing the entire economy into a single output implies that all the separate markets encompassed by an actual economy are in equilibrium and that the equilibrium is maintained over time. For that equilibrium to be maintained over time, agents must formulate correct expectations of all the individual relative prices that prevail in those markets over time. The ADM model sidestepped that expectational problem by assuming that a full set of current and forward markets exists in the initial period and that all the agents participating in the economy are present and endowed with wealth enabling them to trade in the initial period. Under those rather demanding assumptions, if an equilibrium price vector covering all current and future markets is arrived at, the optimizing agents will formulate a set of mutually consistent optimal plans conditional on that vector of equilibrium prices, so that all the optimal plans can and will be carried out as time happily unfolds for as long as the agents continue in their blissful existence.

However, without a complete set of current and forward markets, achieving the full equilibrium of the ADM model requires that agents formulate consistent expectations of the future prices that will be realized only over the course of time, not in the initial period. Roy Radner, who extended the ADM model to accommodate the case of incomplete markets, called such a sequential equilibrium an equilibrium of plans, prices, and price expectations. The sequential equilibrium described by Radner has the property that expectations are rational, but the assumption of rational expectations for all future prices over a sequence of future time periods is so unbelievably outlandish as an approximation to reality — sort of like the assumption that it could be 76 degrees Fahrenheit in Washington DC in February — that to build that assumption into a macroeconomic model is an absurdity of mind-boggling proportions. But that is precisely what modern macroeconomics, in both its Real Business Cycle and New Keynesian incarnations, has done.

If, instead of the sequential equilibrium of plans, prices, and price expectations, one tries to model an economy in which the price expectations of agents can be inconsistent, while prices adjust within any period to clear markets — the method of temporary equilibrium first described by Hicks in Value and Capital — one can begin to develop a richer conception of how a macroeconomic system can be subject to the financial disturbances and financial crises to which modern macroeconomies are occasionally, if not routinely, vulnerable. But that would require a reorientation, if not a repudiation, of the path on which macroeconomics has been resolutely marching for nigh on forty years. In his 1984 paper "Consistent Temporary Equilibrium," published in a volume edited by J. P. Fitoussi, C. J. Bliss made a start on developing such a macroeconomic theory.

There are few economists better equipped than Roger Farmer to lead macroeconomics onto a new and more productive path. He has not done so in this book, but I am hoping that, in his next one, he will.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.
